A.I. Is Mastering Language. Should We Trust What It Says?

But as the fluency of GPT-3 has dazzled many observers, the large-language-model approach has also drawn significant criticism over the past few years. Some skeptics argue that the software is capable only of blind mimicry – that it imitates the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the LLM approach from ever maturing into anything resembling human intelligence. For these critics, GPT-3 is just the latest shiny object in the long history of AI hype, channeling research dollars and attention into what will ultimately prove to be a dead end, keeping other promising approaches from maturing. Other critics believe that software like GPT-3 will forever remain compromised by the biases, propaganda and misinformation in the data on which it has been trained, meaning that using it for anything more than parlor tricks will always be irresponsible.

Wherever you land in this debate, the pace of recent improvement in large language models makes it hard to imagine that they won’t be deployed commercially in the coming years. And that raises the question of exactly how they – and, for that matter, the other headlong advances of AI – should be released to the world. In the rise of Facebook and Google, we have seen how dominance in a new realm of technology can quickly lead to astonishing power over society, and AI threatens to be even more transformative than social media in its ultimate effects. With so much promise and so much potential for abuse, what is the right kind of organization to build and own something of such scale and ambition?

Or should we be building it at all?

OpenAI’s origins date to July 2015, when a small group of tech-world luminaries gathered for a private dinner at the Rosewood Hotel on Sand Hill Road, the symbolic heart of Silicon Valley. The dinner took place against the backdrop of two recent developments in the technology world, one positive and one more troubling. On the one hand, radical advances in computing power – and some new breakthroughs in the design of neural nets – had created a palpable sense of excitement in the field of machine learning; there was a sense that the long “AI winter”, the decades in which the field had failed to live up to its early hype, was finally beginning to thaw. A group at the University of Toronto had trained a program called AlexNet to identify classes of objects in photographs (dogs, castles, tractors, tables) with a level of accuracy far higher than any neural net had previously achieved. Google quickly swooped in to hire AlexNet’s creators, while also acquiring DeepMind and launching an initiative of its own called Google Brain. The mainstream adoption of intelligent assistants like Siri and Alexa demonstrated that even scripted agents could be breakout consumer hits.

But over the same period, a seismic shift in attitudes toward Big Tech was underway, with once-popular companies such as Google and Facebook being criticized for their near-monopoly power, for amplifying conspiracy theories and for inexorably siphoning our attention toward algorithmic feeds. Long-term fears about the dangers of artificial intelligence were appearing on op-ed pages and the TED stage. Nick Bostrom of Oxford University published his book “Superintelligence”, which presented a series of scenarios in which advanced AI could deviate from the interests of humanity, with potentially disastrous consequences. In late 2014, Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” It seemed that the cycle of corporate consolidation that characterized the social media age was already happening with AI – only this time, the algorithms might not just sow polarization or sell our attention to the highest bidder; they might end up destroying humanity itself. And, once again, this power seemed destined to be controlled by a handful of Silicon Valley megacorporations.

The agenda for that July dinner on Sand Hill Road was nothing if not ambitious: figuring out the best way to steer AI research toward the most positive outcome possible, avoiding both the short-term negative consequences that bedeviled the Web 2.0 era and the long-term existential risks. From that dinner, a new idea began to take shape – one that would soon become a full-time obsession for Sam Altman of Y Combinator and Greg Brockman, who had recently left Stripe. Interestingly, the idea was not so much technological as organizational: if AI was going to be introduced to the world in a safe and beneficial way, it would require innovation at the level of governance, incentives and stakeholder involvement. The technical route to the field’s ultimate goal, called artificial general intelligence or AGI, was not yet clear to the group. But the troubling forecasts of Bostrom and Hawking had convinced them that the achievement of humanlike intelligence by AIs would concentrate an astonishing amount of power, and moral burden, in whoever eventually managed to invent and control them.

In December 2015, the group announced the creation of a new entity called OpenAI. Altman signed on to be chief executive of the enterprise, with Brockman overseeing the technology; another attendee of the dinner, Ilya Sutskever, a co-creator of AlexNet, was recruited from Google to be head of research. (Elon Musk, who was also present at the dinner, joined the board of directors, but left in 2018.) In a blog post, Brockman and Sutskever laid out the scope of their ambition: “OpenAI is a nonprofit artificial-intelligence research company,” they wrote. “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” They added: “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

Three years later, the founders of OpenAI would issue a public charter spelling out the core principles behind the new organization. The document was easily interpreted as a not-so-subtle dig at Google’s “don’t be evil” motto from its early days, an acknowledgment that maximizing the social benefits of a new technology – and minimizing its harms – was not always such a simple calculation. While Google and Facebook had reached global dominance through closed-source algorithms and proprietary networks, the founders of OpenAI promised to go in the other direction, sharing new research and code freely with the world.
