Microsoft Builds a Better AI Using ChatGPT

Newsdesk

Microsoft has made a significant breakthrough with its latest language model, Phi-1, which has a parameter count of just 1.3 billion. In contrast to the traditional belief that bigger models yield better results, Microsoft's approach prioritizes the quality of training data. By meticulously curating a high-quality, textbook-like dataset, Phi-1 achieved remarkable performance on coding tasks, surpassing even GPT-3.5 and its 175 billion parameters.
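
The curation idea can be illustrated with a simple filtering step. The sketch below is hypothetical, not Microsoft's actual pipeline: it assumes some classifier `score_educational_value()` that rates how instructive a training example is, and keeps only the examples that clear a quality threshold.

```python
# Hypothetical sketch of textbook-style data curation (not Microsoft's
# actual pipeline): keep only examples a quality classifier rates highly.
def curate(examples, score_educational_value, threshold=0.8):
    """Return the subset of training examples judged 'textbook quality'.

    score_educational_value: a placeholder callable mapping an example
    to a score in [0, 1]; Phi-1's real filter is more involved.
    """
    return [ex for ex in examples if score_educational_value(ex) >= threshold]
```

The design point is that aggressive filtering shrinks the dataset but raises its average instructional value, which is the trade-off Microsoft is betting on.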

Built on the Transformer architecture and trained on 8 Nvidia A100 GPUs, Phi-1 completed training in just four days. Comparative tests show that Phi-1 outperforms GPT-3.5, achieving 50.6% pass@1 accuracy on the HumanEval coding benchmark despite GPT-3.5's staggering 175 billion parameters. Microsoft plans to open-source Phi-1 on the Hugging Face platform, promoting accessibility and encouraging collaboration.
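
Once released, the model should be loadable like any other Hub checkpoint. Here is a minimal sketch, assuming the `transformers` library and the model id `microsoft/phi-1` on the Hugging Face Hub:

```python
# Minimal sketch: load Phi-1 from the Hugging Face Hub and complete a
# Python function stub. Assumes the checkpoint is published as
# "microsoft/phi-1"; at 1.3B parameters it fits on a single GPU.
# Older transformers versions may need trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-1", torch_dtype=torch.float32
)

prompt = 'def is_prime(n: int) -> bool:\n    """Return True if n is prime."""\n'
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```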

Notably, Microsoft previously developed Orca, a smaller language model with 13 billion parameters, which exhibited superior performance compared to ChatGPT. By challenging the notion that larger model sizes are necessary for better performance, Microsoft emphasizes the importance of training data quality. The decision to open-source Phi-1 reflects Microsoft's commitment to pushing the boundaries of natural language processing and driving advancements in the field.
