Google's flagship GenAI model, Gemini, has drawn criticism from policymakers alarmed by its ability to generate deceptive content on request and the resulting potential for disinformation. In response, Google is directing resources toward AI safety with the establishment of a new organization, AI Safety and Alignment, within Google DeepMind, the company's AI R&D division.
According to Google, the AI Safety and Alignment organization will focus on safety concerns related to artificial general intelligence (AGI) while developing safeguards for existing and future GenAI models. The initiative aims to mitigate risks such as misinformation, bias amplification, and other harms.
Anca Dragan, a former Waymo staff research scientist and UC Berkeley professor, will lead the team. Dragan emphasized the importance of enabling AI models to better understand human preferences and values, while also enhancing their robustness against adversarial attacks and uncertainty. Despite concerns about her dual roles, Dragan believes that her work at UC Berkeley and at DeepMind is complementary and will help address both present-day concerns and long-term risks associated with AI.
However, skepticism surrounding GenAI tools remains high, particularly regarding their potential to spread misinformation and deepfakes. Surveys indicate widespread concern among the public and enterprises about the reliability and ethical implications of AI technologies. Despite efforts to improve AI safety, challenges persist, and there are doubts about whether the risks of AI deployment can be fully mitigated.
Dragan stated, “Our work aims to enable models to better and more robustly understand human preferences and values, to know what they don’t know, to work with people to understand their needs and to elicit informed oversight, to be more robust against adversarial attacks and to account for the plurality and dynamic nature of human values and viewpoints.”
While Google and DeepMind are committed to enhancing AI safety, the effectiveness of these measures remains uncertain. The public and regulators will likely continue to scrutinize AI technologies and hold companies accountable for addressing the ethical concerns and risks associated with AI deployment.