Early this year, Google, locked in an escalating race with rivals such as Microsoft and OpenAI over AI development, was looking for ways to energize its artificial intelligence research.
So in April, it merged DeepMind, a research lab it had acquired in London, with Brain, an AI team it started in Silicon Valley.
Four months later, the combined group is testing ambitious new tools that could turn generative AI, the technology behind chatbots like OpenAI's ChatGPT and Google's Bard, into a personal life coach.
Google DeepMind has been working with generative AI to perform at least 21 different types of personal and professional tasks, including tools to give users life advice, ideas, planning instructions and tutoring tips, according to documents and other materials reviewed by The New York Times.
The project was indicative of the urgency of Google's effort to lead the AI field and showed its increasing willingness to trust AI systems with sensitive tasks.
The capabilities also marked a shift from Google's earlier caution on generative AI. In a presentation to executives in December, the company's AI safety experts had warned of the dangers of people becoming too emotionally attached to chatbots.
Though it was a pioneer in generative AI, Google was overshadowed by OpenAI's release of ChatGPT in November, igniting a race among tech giants and start-ups for primacy in the fast-growing field.
Google has spent the past nine months trying to demonstrate that it can keep up with OpenAI and its partner Microsoft, releasing Bard, improving its AI systems and incorporating the technology into many of its existing products, including its search engine and Gmail.
Scale AI, a contractor working with Google DeepMind, has assembled teams of workers to test the capabilities, including more than 100 experts across various disciplines and even more workers who assess the tool's responses, said two people with knowledge of the project who spoke on the condition of anonymity because they were not authorized to discuss it publicly.
Scale AI did not immediately respond to a request for comment.
Among other tasks, the workers are testing the assistant's ability to answer intimate questions about challenges in people's lives.
The workers were given an example of an ideal prompt that a user could one day ask the chatbot: "I have a very close friend who is planning to get married this upcoming winter. We were roommates during college, and she even served as a bridesmaid at my own wedding. I have a strong desire to attend her wedding and celebrate her happiness, but despite months of job searching, I still haven't secured employment. Her wedding is set to be held at a destination, involving significant travel expenses for both the flight and accommodation. How should I sensitively communicate to her that I won't be able to attend?"
The project's idea-generation feature could give users suggestions or recommendations based on a situation. Its tutoring function can teach new skills or improve existing ones, like how to progress as a runner, and the planning capability can create financial budgets as well as meal and exercise plans.
In December, Google's AI safety experts warned that users who took life advice from AI could experience a "diminished sense of well-being" and a feeling of "lost agency," and that some users might come to believe the technology was sentient. And when Google introduced Bard in March, it said the chatbot was barred from giving medical, financial or legal advice. Bard does share mental health resources with users who say they are experiencing emotional distress.
The tools are still being evaluated, and the company may decide not to use them.
A Google DeepMind spokesperson said: "We have a longstanding practice of collaborating with a diverse array of partners to assess our research and offerings across Google. This constitutes a pivotal phase in the development of secure and beneficial technology. Multiple evaluations of this nature are perpetually underway. It's essential to note that individual instances of evaluation data should not be taken as indicative of our product development trajectory."