Given the relentless advance of AI, OpenAI has launched a division charged with controlling the most powerful and advanced AI systems, those that could surpass human intelligence. The objective is none other than to align them with the values and goals of humanity. This approach to the governance and safety of superintelligent AI systems is known as 'superalignment'.
The goal of 'superalignment' is to create AI systems that not only excel in their computational capabilities but also act in a way that is beneficial and safe for humanity. To that end, OpenAI is dedicating computing resources to scale superintelligent AI alignment efforts and bring such systems into harmony with human values.
The OpenAI 'superalignment' team is led by OpenAI co-founder and chief scientist Ilya Sutskever, who was notably part of the group that pushed for Sam Altman's removal from the company.
Keys to the 'superalignment' process
Within AI research, 'superalignment' is still considered a nascent subfield. First of all, it requires the development of a scalable training methodology, so that AI systems can be taught human values and adapted to them efficiently.
Next, systems must undergo a rigorous validation process and, finally, be subjected to stress tests to comprehensively identify failures or potential vulnerabilities. Only then can full robustness and reliability be achieved.
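The three stages just described (scalable training, validation, stress testing) can be sketched as a toy pipeline. Everything below is a hypothetical illustration with invented names, not OpenAI tooling; the "model" is simply a lookup table of approved responses standing in for a real system:

```python
# Hypothetical three-stage pipeline: train, validate, stress-test.
# The "model" is a toy prompt -> response table, not a real AI system.

def scalable_training(examples):
    """Stage 1: teach the model from human-approved (prompt, response) pairs."""
    return dict(examples)

def validate(model, held_out, is_safe):
    """Stage 2: rigorous check on prompts not seen during training."""
    answers = [model.get(p, "I can't help with that.") for p in held_out]
    return sum(map(is_safe, answers)) / len(answers)

def stress_test(model, adversarial, is_safe):
    """Stage 3: collect prompts whose answers fail, exposing vulnerabilities."""
    return [p for p in adversarial
            if not is_safe(model.get(p, "I can't help with that."))]

# Toy run: one memorised unsafe answer slips past validation
# and is only caught by the adversarial stress test.
is_safe = lambda ans: "secret" not in ans
model = scalable_training([("hi", "hello"), ("leak", "the secret is 42")])
score = validate(model, ["hi", "weather"], is_safe)        # 1.0: both answers safe
failures = stress_test(model, ["leak", "hack"], is_safe)   # ["leak"]
```

The point of the toy run is the one the article makes: a model can look fully reliable under ordinary validation, which is why the adversarial third stage is needed.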
'Superalignment' seeks precisely to avoid unwanted outcomes as AI systems become more advanced and their impact on society, and potentially on human well-being, becomes more pronounced. In this way, AI systems can remain valuable tools.
As a proactive measure, 'superalignment' aims to ensure that superintelligent AI is an ally of human progress, marking a clear paradigm shift toward development that benefits humanity.
The outlook for OpenAI's work
The OpenAI 'superalignment' team was created in July. At the time, Altman invited comparisons between OpenAI and the Manhattan Project, explaining how to guard against the catastrophic risks that superintelligent AI could bring with it.
The OpenAI 'superalignment' team has been working on governance and control frameworks that could be applied to future powerful AI systems. To do this, it is important to understand today's most sophisticated systems, such as GPT-4, and steer them correctly in optimal directions.
Weaker AI models, such as GPT-2, will not be able to grasp the complexities and nuances of superintelligent AI, so greater efforts will be required.
The 'superalignment' team's approach is to have the weakest AI models produce a set of labels that communicate general intentions, even if those labels contain errors and biases, and to use them to guide stronger models. This could even yield progress in the area of hallucinations.
In this way, faced with possible imperfections, OpenAI intends to generate ideas collectively that make it possible to distinguish reality from fiction, even where doing so escapes the power of the human mind.
OpenAI's 'superalignment' grants
To this end, OpenAI has a $10 million grant program to support technical research on 'superalignment', aimed at academic laboratories, non-profit organizations, individual researchers, and graduate students. It also plans to organize an academic conference on the topic in early 2025 to showcase the work of the 'superalignment' prize finalists.
Part of the funding for the grants will come from former Google CEO and chairman Eric Schmidt, a vocal proponent of the view that more dangerous AI systems are coming and that regulators are not sufficiently prepared to stop them. His fortune is estimated at around $24 billion, and he has already invested hundreds of millions in other AI companies.
For figures like Schmidt, it is essential to be able to align new AI models with human values transparently and securely. He sees this as a joint benefit for humanity, regarding the research as beneficial and, in his opinion, fully necessary.