Major News from GitHub, Microsoft and OpenAI

GitHub Accelerator Fuels Open Source AI Revolution

GitHub has launched its 2024 Accelerator program, providing funding, mentorship, and community support to promising open source projects. The program aims to empower developers and democratize access to cutting-edge technologies, particularly in the rapidly evolving field of AI. This year's cohort includes Unsloth, a project focused on making AI model fine-tuning more efficient, while 2023 participant Formbricks has gone on to transform user feedback with its open source software. GitHub's investment of $40,000 per project, along with mentorship and community engagement, underscores the company's commitment to fostering open source innovation. The 2024 program is poised to accelerate technologies with the potential to reshape the software development and AI landscape.

Microsoft and Beihang University Introduce MoRA for Efficient LLM Fine-Tuning

Researchers from Microsoft and Beihang University have unveiled MoRA, a new technique for fine-tuning large language models (LLMs) more efficiently and cost-effectively. MoRA addresses limitations of existing methods like LoRA, particularly in tasks requiring LLMs to acquire new knowledge. While LoRA excels at simple tasks like text classification, its low-rank updating mechanism struggles with more complex applications such as mathematical reasoning and continual pre-training. MoRA overcomes this by replacing LoRA's pair of low-rank matrices with a single square matrix, enabling higher-rank updating and greater learning capacity at a similar trainable-parameter budget. In tests, MoRA significantly outperformed LoRA on memorization tasks and demonstrated superior performance in continual pre-training for specialized domains like biomedicine and finance. MoRA matched LoRA's performance in instruction tuning and mathematical reasoning, and increasing its rank further narrowed the gap with full fine-tuning. The researchers have open-sourced MoRA and made it compatible with existing LoRA tools. This development holds promise for enterprise LLM applications, enabling companies to enhance models with proprietary knowledge and use smaller models for complex tasks, ultimately reducing costs and expanding the possibilities of LLM deployment.
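
To make the contrast concrete, here is a minimal, illustrative sketch (not the authors' released implementation) of the two kinds of weight updates: LoRA's pair of low-rank matrices versus a MoRA-style single square matrix with non-parameterized compression and decompression. The hidden size d, rank r, and the truncation/zero-padding operators below are illustrative assumptions; the paper explores several compress/decompress variants.

```python
# Illustrative sketch only: contrasts LoRA's low-rank update with a MoRA-style
# square-matrix update at a comparable trainable-parameter budget.
# The truncation/zero-padding compress/decompress here is one simple choice;
# the paper discusses several alternatives.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

d = 1024   # hidden size of the frozen weight W (d x d); illustrative value
r = 8      # LoRA rank; illustrative value

class LoRADelta(nn.Module):
    """LoRA update: delta_W = B @ A, so rank(delta_W) <= r, with 2*d*r params."""
    def __init__(self, d, r):
        super().__init__()
        self.A = nn.Parameter(torch.randn(r, d) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d, r))         # up-projection, zero-init
    def forward(self, x):                                # x: (batch, d)
        return x @ self.A.T @ self.B.T

class MoRAStyleDelta(nn.Module):
    """MoRA-style update: one square matrix M of size r_hat x r_hat, with
    non-parameterized compress/decompress, so rank(delta_W) can reach r_hat."""
    def __init__(self, d, r):
        super().__init__()
        self.d = d
        self.r_hat = math.isqrt(2 * d * r)               # match LoRA's 2*d*r budget
        self.M = nn.Parameter(torch.zeros(self.r_hat, self.r_hat))
    def forward(self, x):                                # x: (batch, d)
        compressed = x[:, : self.r_hat]                  # compress: truncate to r_hat dims
        updated = compressed @ self.M.T                  # square-matrix (high-rank) update
        return F.pad(updated, (0, self.d - self.r_hat))  # decompress: zero-pad back to d

lora, mora = LoRADelta(d, r), MoRAStyleDelta(d, r)
print(sum(p.numel() for p in lora.parameters()))   # 16384 trainable parameters
print(sum(p.numel() for p in mora.parameters()))   # 16384 (128 x 128), same budget
print(r, mora.r_hat)                               # max update rank: 8 for LoRA vs 128
```

Because the square matrix can realize an update of rank up to r_hat rather than r, it can store more new information for the same number of trainable parameters, which is the property the memorization and continual pre-training results above are probing.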

OpenAI Begins Training New Frontier AI Model, But Release Delayed for Safety Review

OpenAI has announced the start of training for its next-generation AI model, but the release won't happen for at least 90 days. The company has formed a new Safety and Security Committee to evaluate and strengthen its safeguards before the model's debut. The move comes after a series of controversies for OpenAI, including a public dispute with actress Scarlett Johansson over voice cloning allegations, the departure of key researchers, and criticism of its employee agreements. The company also faces growing scrutiny over the potential risks of advanced AI. The new committee, composed of OpenAI board members and company leaders, will have until late August to provide recommendations. However, some critics have raised concerns about the committee's lack of independence, as it is made up entirely of company insiders rather than outside experts. OpenAI's announcement coincides with a broader backlash against generative AI, with Google also facing criticism for its AI-powered search features. The industry is grappling with how to balance innovation against risk as AI becomes increasingly powerful and pervasive.