To date, Copilot is seen only as a programming tool for simple projects, filling assistive roles
Copilot, a programming tool released by GitHub (a Microsoft subsidiary), is trained largely on open-source code hosted on GitHub and offers suggestions as a developer writes a program, ranging from a single line to an entire block of code. Its functionality goes beyond that of a simple suggestion tool. It is powered by a descendant of the GPT-3 family of models, and coders are already rejoicing at a tool with the potential to drastically reduce the burden of writing repetitive code. Smaller than GPT-3, the underlying model is more efficient at its specialized task. Unlike its predecessor, Copilot, at least in theory, does not carry the same ambiguity about whether it can perform logical reasoning. GitHub CEO Nat Friedman has described Copilot as the third revolution of software development.
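The suggestion workflow described above typically starts from a comment or a function signature typed by the developer. The sketch below illustrates the idea; the function name and the "completion" are hypothetical, not actual Copilot output:

```python
# Typed by the developer: a docstring and a signature.
def fahrenheit_to_celsius(fahrenheit: float) -> float:
    """Convert a temperature from Fahrenheit to Celsius."""
    # A Copilot-style tool would suggest a body like the line below,
    # inferred from the docstring and the function name.
    return (fahrenheit - 32) * 5 / 9


print(fahrenheit_to_celsius(212))  # boiling point of water -> 100.0
```

In practice the developer reviews the suggestion and accepts, edits, or rejects it; the tool proposes code, it does not commit it.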
There is a reason for the excitement
Copilot is aimed at improving developer productivity and has proved quite useful in this respect among internal users. Though gains in code quality remain few and far between, at the moment it is helping newcomers get started with coding and old-timers learn new languages faster. Ever since the tool was introduced, apprehensions have been doing the rounds about whether it is here to replace human coders. Though on the surface the apprehension sounds plausible, there is no compelling reason to take it seriously. Holger Mueller, an analyst at Constellation Research, said in an interview, “We’ve known for a long time that the world does not have enough developers for the code that needs to be written. It is an overall industry trend. ML is out there and needs to be applied — as often and as well as possible. All the tool vendors are doing it.” Besides, Copilot has proven quite useful for testing. The GitHub tech preview page states clearly, “Tests are the backbone of any robust software engineering project. Import a unit test package and let GitHub Copilot suggest tests that match your implementation code.”
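The test-generation workflow quoted from the tech preview page can be pictured as follows: given an implementation the developer has already written, the tool suggests matching unit tests. This is an illustrative sketch using Python's built-in unittest package; the `slugify` function and the suggested tests are hypothetical examples, not actual Copilot output:

```python
import unittest


def slugify(title: str) -> str:
    """Implementation code written by the developer:
    lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())


class TestSlugify(unittest.TestCase):
    # Tests of the kind a Copilot-style tool might suggest
    # to match the implementation above.
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_input_is_lowercased(self):
        self.assertEqual(slugify("GitHub Copilot"), "github-copilot")
```

Running `python -m unittest` on the file would then execute the suggested tests, which the developer can keep, extend, or discard.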
All is not well with Copilot
Copilot falls short in two respects. First, language models are useful when applied to specific tasks, but in a broader framework they fail to perform. Second, there is the question of how big technology companies monetize language models that are built largely on open-source code meant for collective benefit. Another dimension deserving deeper scrutiny is whether it can scale up to meet mainstream programming requirements. To date, it is seen only as a programming tool for simple projects, filling assistive roles rather than solving complex programming problems. In an interview, Eric Newcomer, CTO of WSO2, said, “It’s a very interesting idea and should work well for simple examples, but I’d be curious to see how well it will work for complex code problems”.
Does Copilot make a case for AGI?
Whether the artificial intelligence we have today can ever reach the standard of AGI is still a matter of debate. The AI we know is more akin to Artificial Narrow Intelligence (ANI), and it would need a drastic makeover to act like AGI. It would therefore be futile to view Copilot’s impressive potential through the AGI prism. As of now, it is not even a serious question whether Copilot, for all its lineage in large language models, could take us to AGI.