AI and You: Google’s Gemini Embarrassing Images, VCs and AI’s Magical ‘Abundance’


Competition among companies including Anthropic, OpenAI, Google, Meta, Microsoft and Perplexity to win over users has led these makers of generative AI chatbots to push the boundaries of what their systems can do and rush updates and new versions to market.

But the current state of Google’s Gemini (formerly Bard) reminds us that gen AI is a work in progress and that developers should pause and do proper quality assurance testing before releasing things to the public. As for Gemini, Google’s large language model has been delivering results so off the rails that last week Google paused its three-week-old image generation feature to address “inaccuracies in some historical image generation depictions,” the company said in a Feb. 22 post on X.

What historical inaccuracies are we talking about? “Images showing people of color in German military uniforms from World War II that were created with Google’s Gemini chatbot have amplified concerns that artificial intelligence could add to the internet’s already vast pools of misinformation as the technology struggles with issues around race,” The New York Times wrote in a story under the headline “Google Chatbot’s AI Images Put People of Color in Nazi-Era Uniforms.”

“Besides the false historical images, users criticized the service for its refusal to depict white people: When users asked Gemini to show images of Chinese or Black couples, it did so, but when asked to generate images of white couples, it refused,” The NYT added. “According to screenshots, Gemini said it was ‘unable to generate images of people based on specific ethnicities and skin tones,’ adding, ‘This is to avoid perpetuating harmful stereotypes and biases.’”

Oh, if only Google had the resources, including QA teams, to identify these kinds of potential problems before releasing tools to its billions of users.

A day after its social media mea culpa, Google posted a more detailed apology in a blog post under the headline “Gemini image generation got it wrong. We’ll do better.” It said the results weren’t what it “intended” and led to “images that were embarrassing and wrong.”

“So what went wrong?” Google senior vice president Prabhakar Raghavan wrote. “In short, two things. First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range. And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive.”

Raghavan said the company has learned some lessons, including that it needs to improve the image generation of people, a process that will “include extensive testing.” The company is also recommending that you take Gemini results with a giant grain of salt and instead rely on Google Search for information, especially around recent news. That’s good advice, given the examples being posted showing wildly out-of-context answers to some users’ prompts and questionable results about current events.

“Gemini is built as a creativity and productivity tool, and it may not always be reliable, especially when it comes to generating images or text about current events, evolving news or hot-button topics,” Raghavan added. “It will make mistakes. As we’ve said from the beginning, hallucinations are a known challenge with all LLMs — there are instances where the AI just gets things wrong.”

Here are the other doings in AI worth your attention.

AI’s magical ‘Abundance’

Not to oversimplify, but many AI debates either tout the benefits of the emerging technology to help empower people and drive humanity forward or discuss how gen AI could be the undoing of jobs, ethics, privacy, security and society. Of course, the truth lies somewhere in the middle.

But let’s dial into this week’s discussions of gen AI’s potential (Gemini’s growing pains aside), based on new commentary from the venture capital firm a16z, which was co-founded by browser pioneer and big-time AI investor Marc Andreessen.

Continuing its 2023 treatises “Why AI Will Save the World” and “The Techno-Optimist Manifesto,” a16z is now sharing its insights on how AI will help consumers in a package of essays listing out opportunities under the label The Abundance Agenda.

“From the plow to the microchip, every new technology has typically been followed by a period of newfound wealth for consumers in the form of cheaper goods, improved productivity and longer, healthier lives,” a16z says. “Generative AI will be no different, and its benefits for consumers will be as profound as the technology is magical.”

“In a word, we predict an Era of Abundance,” the firm said. (Note: That’s actually three words.)

A16z partner Anish Acharya asks readers to consider “the implications for software that can write poetry, listen empathetically, compose music or provide dating advice.” They include, he says, giving consumers the ability to “rediscover the latent creativity that lives within all of us [since] lacking creative skills will no longer be a barrier to creating art,” to “craft their most impactful and efficient work, as their productivity multiplies and their administrative overhead goes to zero,” to “find new opportunities for connection and belonging” and to gain “access to richer opportunities for personal development and self care.”

Well, that sure sounds lovely, a16z.

But I can’t help but contrast that with the picture painted back in November by entertainment maverick and entrepreneur Jeffrey Katzenberg, co-founder of Dreamworks animation studio, which brought us such classics as Shrek and Kung Fu Panda. (And I mean that sincerely — I’m a big fan of both of those flicks!)

Katzenberg said gen AI is an amazing tool that will deliver new opportunities, allowing movies to be made faster than in the “good old days,” when it took 500 artists and five years for him to make a world-class animated movie. But he noted there will be “disruption,” that work will be “commoditized” by new AI tech, and that this will essentially lead to 90% of animation artists being replaced by AI.

“I don’t know of any industry that will be more impacted than any aspect of media, entertainment and creation,” Katzenberg said in an interview at Bloomberg’s New Economy Forum. “If you look at how media has been impacted in the past 10 years by the introduction of digital technology, what will happen in the next 10 years will be 10x as great.”

Actually, he added, the entertainment industry will likely see those productivity gains in the next three years.

As for the ideas behind those films, they will still have to come from “individual creativity.” It’s just that those creatives will be feeding the work into an AI tool via prompts, Katzenberg said.

And the animators and other movie-making talent who won’t be needed in the AI world of the near future? Maybe they can share in some of that AI abundance to find new opportunities for connection and belonging. 

DOJ appoints first chief AI officer

US Attorney General Merrick Garland named the Department of Justice’s first-ever chief science and technology adviser and chief AI officer this week, saying the role was needed to keep up with tech developments in a fast-changing world.

“The Justice Department must keep pace with rapidly evolving scientific and technological developments in order to fulfill our mission to uphold the rule of law, keep our country safe and protect civil rights,” Garland said in a statement.

He appointed Jonathan Mayer, an assistant professor at Princeton University’s Department of Computer Science and School of Public and International Affairs, who has researched technology and the law. Mayer’s job includes hiring technical talent and building up the DOJ’s technology capacity.

The DOJ “has already used AI to trace the source of opioids and other illegal drugs, analyze tips submitted to the FBI, and organize evidence collected in its probe of the Jan. 6, 2021, attack on the U.S. Capitol,” Reuters reported, adding that “U.S. officials have been wrestling with how to minimize the dangers posed by a loosely regulated and rapidly expanding technology while also seeking to exploit its potential benefits.”

The announcement comes as many regulators are looking into ways to put safeguards around the use of AI. President Joe Biden released an executive order on AI in October. And California state Sen. Scott Wiener introduced a bill this month that would create a new office to assess AI regulations in the state. The Safe and Secure Innovation for Frontier Artificial Intelligence Systems Act, if approved, would “require every major AI system to have a built-in emergency off-switch, in case something goes wrong.”

Maybe Google has some thoughts on that, Sen. Wiener?

Oops, some lawyers did it again

Whoever said history repeats itself got it right when it comes to chatbots and legal filings.

Last year, lawyers in Texas were fined for using ChatGPT to write briefs that cited cases made up (hallucinated) by the chatbot. Instead of learning from that situation — or at least double-checking that the precedent you’re citing in a legal argument isn’t an AI-generated fabrication — a lawyer in Massachusetts “filed three separate legal memoranda that cited and relied on fictitious cases,” according to LawSites. The lawyer blamed it on an associate and two recent law school graduates who were working on the case with him.

In another case, a litigant in Missouri who wasn’t a lawyer filed “an appellate brief in which 22 of 24 cases were fictitious. Not only that, but they were fictitious in ways that should have raised red flags, including that they had made-up-sounding generic names such as Smith v. ABC Corporation and Jones v. XYZ Corporation.”

The litigant, LawSites said, “claimed to have gotten the citations from a lawyer he hired through the internet.” The judge in that case didn’t want to hear apologies from the litigant for citing cases that didn’t “exist in reality” and ordered him to pay $10,000 toward his opponent’s legal fees — after dismissing his case.

New York Times experimenting with gen AI ad tool

After staying relatively quiet about its use of generative AI before hiring an AI newsroom leader in December, The New York Times is reportedly building a tool that will use AI to “test new ad-targeting solutions,” according to Axios, which cited executives at the NYT’s advertising partners.

The news comes after NYT CEO Meredith Kopit Levien said on the company’s earnings call in early February that it was “actively experimenting” with gen AI tools for its ad products.

“The new technology, which is being created internally at the Times, will deliver a recommendation for where an ad campaign could perform best based on its message or objectives,” Axios reported, adding that the NYT has over 10 million paid subscribers and more than 100 million registered readers. The tool could also be used to “target niche audiences that were previously unidentifiable based on their interests, aspirations and opinions.”

The NYT, which said its ad revenue fell 8.4% to $164.1 million in the fourth quarter of 2023, also said it expects ad revenue to continue to decline in 2024, according to Digiday.

Meanwhile, the media org is pursuing the copyright lawsuit against ChatGPT parent OpenAI that it filed in December. Kopit Levien said the company is looking at licensing deals with other AI companies “as long as there is fair value exchange and in support of a broader business model.”

The NYT is also testing AI synthetic voices, she said, with a new product to be released “probably” this quarter that will let people listen to written NYT stories.

AI term of the week: Human in the loop 

When it comes to gen AI and AI systems in general, you’ll often hear that we should keep people involved in how these systems work, an idea often summarized as “human in the loop” (HITL).

Even though it may sound obvious, I thought I’d share a few definitions, which offer different ways of thinking about what a human (or humans) in the loop can actually mean.

Humans in the loop (as defined by Stanford University’s Institute for Human-Centered AI): “What if, instead of thinking of automation as the removal of human involvement from a task, we imagined it as the selective inclusion of human participation? The result would be a process that harnesses the efficiency of intelligent automation while remaining amenable to human feedback, all while retaining a greater sense of meaning. Essentially, the human-in-the-loop approach reframes an automation problem as a Human-Computer Interaction (HCI) design problem. In turn, we’ve broadened the question of ‘how do we build a smarter system?’ to ‘how do we incorporate useful, meaningful human interaction into the system?’”

Human in the loop (as defined by Salesforce.com): “Think of yourself as a manager, and AI as your newest employee. You may have a very talented new worker, but you still need to review their work and make sure it’s what you expected, right? That’s what ‘human in the loop’ means — making sure that we offer oversight of AI output and give direct feedback to the model, in both the training and testing phases, and during active use of the system. Human in the Loop brings together AI and human intelligence to achieve the best possible outcomes.”

Human in the loop (as defined by Telus International): “A blend of supervised machine learning and active learning where humans are involved in both the training and testing stages of building an algorithm. This practice of uniting human and machine intelligence creates a continuous feedback loop that allows the algorithm to produce better results each time.”
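All three definitions boil down to the same mechanics: let the model handle what it is confident about, escalate the rest to a person, and feed the person’s corrections back into training. Here is a minimal Python sketch of that checkpoint; every name in it (the function, the confidence threshold, the `ask_human` callback) is a hypothetical illustration, not any vendor’s actual API.

```python
def human_in_the_loop(predictions, threshold=0.9, ask_human=None):
    """Route low-confidence model predictions to a human reviewer.

    predictions: list of (item, label, confidence) tuples from a model.
    ask_human: callable(item, label) that returns the human-verified label.
    Returns (accepted, feedback), where feedback records each human
    correction so it can be fed back into the next training round.
    """
    accepted, feedback = [], []
    for item, label, confidence in predictions:
        if confidence >= threshold:
            # Model is confident enough: accept its answer as-is.
            accepted.append((item, label))
        else:
            # Escalate to a person and use their verified label instead.
            verified = ask_human(item, label)
            accepted.append((item, verified))
            if verified != label:
                # A disagreement is a training signal for the model.
                feedback.append((item, label, verified))
    return accepted, feedback
```

The `feedback` list is the “continuous feedback loop” the Telus definition describes: the human-verified labels become new training data, so the system improves while a person stays in control of the uncertain cases.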

Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.


