Generative AI is in the crosshairs at Meta and Alphabet annual meetings as shareholders vote for detailed reports

Generative AI is on the ballot at Big Tech companies like Meta and Alphabet, as shareholders vote on proposals calling for detailed reports by the companies about how the technology is being used to create and spread misinformation. The shareholder proposals, which will be voted on at each company’s upcoming annual general meeting, reflect growing concerns about the power and prevalence of generative AI.

The votes come as Google has faced massive blowback over the past week for its new AI search features, which have produced false and often bizarre responses, including telling users to put glue on pizza. Audio and video deepfakes created with AI technology are also proliferating online, raising concerns in an election year.

Shareholders in Meta, the parent company of Facebook and Instagram, will vote at the company’s annual meeting on Wednesday, while investors in Google-parent company Alphabet will weigh in at the company’s annual meeting on June 7. The resolutions come just a few months after a similar proposal at Microsoft’s AGM on December 7, 2023, which was presented by Nirvana co-founder and bassist Krist Novoselic and garnered 21% of the shareholder vote.

The three proposals were led by ESG activist investor group Arjuna Capital and focus on concerns that generative AI threatens to amplify misinformation and disinformation around the world, particularly during a critical election year in countries like the US and India. A separate shareholder proposal brought to Apple focused on the risks of AI to workers; it was filed by the AFL-CIO Equity Index Funds, affiliated with the AFL-CIO, the largest labor union federation in the US. While the proposal was ultimately voted down by Apple shareholders at the company's annual meeting in February, proponents of AI transparency won a victory beforehand when the SEC ruled that companies like Apple could not bar shareholders from voting on such matters.
In the proxy statement for Meta's annual meeting, the shareholder proposal states that "with Meta’s recent development of gAI products, including conversational assistants and advertising tools, the company is increasingly at risk from misinformation and disinformation generated through its own products."

The proposal calls for the Meta board to “issue a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, assessing the risks to the Company’s operations and finances, and to public welfare, presented by the Company’s role in facilitating misinformation and disinformation disseminated or generated via generative Artificial Intelligence; what steps the Company plans to take to remediate those harms; and how it will measure the effectiveness of such efforts.”

In its response urging shareholders to reject the proposal, Meta pointed to its “five pillars of Responsible AI” which are overseen by its board of directors, as well as its investments in safety and security efforts to combat misinformation and disinformation.

“Given our ongoing efforts to address this topic, the board of directors believes that the requested report is unnecessary and would not provide additional benefit to our shareholders,” the proxy statement said.

In a response to a similar proposal, Alphabet said that its Board of Directors recommended a vote against the stockholder proposal because “Our enterprise risk frameworks, product policies, and tools provide a foundation for identifying and mitigating AI-generated mis/disinformation and other potential risks,” adding that “we continually strive to improve the quality of our generative AI models and applications through both pre-launch testing and ongoing fine-tuning, and we are transparent about our ongoing work via public reporting.”

The proposals for AI reports at Meta and Alphabet are unlikely to be approved, as both companies have dual class stock that concentrates the voting power in the hands of the founders. Still, the proposals carry symbolic power and have the potential to send a message to management if they earn widespread support from shareholders.

At December’s Microsoft AGM, Novoselic served as the spokesperson for the shareholder initiative, describing himself as a long-time shareholder. He accused the company of racing forward, “releasing this nascent technology without the appropriate guardrails.” Generative AI, he continued, “is a game-changer, there’s no question, but the rush to market seemingly prioritizes short term profits over long-term success.”
