🤬 It's a big ble@ping deal, LLM training fines, what?, Demand is rising...
The UN steps into the AI game, Google gets a ticket, AI, AI, AI everywhere...
TL;DR
The United Nations has unanimously adopted its first-ever resolution on artificial intelligence, signaling global consensus on the need for governance frameworks to ensure AI is safe, secure, trustworthy, and equitable. While non-binding, the resolution emphasizes human rights, bridging the digital divide, regulatory approaches, and using AI responsibly to achieve sustainable development goals.
France's competition authority has fined Google €250 million ($270M) for failing to comply with its commitments regarding fair negotiations with news publishers over the use of their content. The authority found that Google used publishers' copyrighted content to train its AI model, Gemini/Bard, without proper notification or compensation. This underscores the evolving regulatory landscape around AI and copyright.
In 2023, there was a significant rise in the use of generative AI technology by enterprises, as evidenced by the increase in contracts specifying its use, according to data from Omdia's Enterprise AI Contracts Database. This trend highlights the growing strategic significance of generative AI across sectors such as healthcare, finance, IT, and business services. Virtual assistants, automation, personalized marketing, and report generation are among the key applications of this technology.
🤖 Innova8AI News
🔧 Stepping up the pressure, the UN involved in AI
Global consensus on the need for AI governance frameworks has been cemented by a non-binding UN resolution. The first-ever resolution of its kind was adopted in a unanimous vote, and it has broad implications for business leaders.
In the weeds:
The non-binding resolution emphasizes respecting human rights, bridging the digital divide between rich and poor nations, developing regulatory and governance frameworks, and using AI responsibly to achieve UN sustainable development goals.
It warns against improper or malicious use of AI without adequate safeguards and encourages all stakeholders to develop approaches and frameworks for governing AI ethically and equitably.
The resolution signals growing global consensus on the urgent need to get AI governance right, with big powers like the U.S. and China involved alongside over 120 co-sponsors.
The impact: The resolution is a major signal of AI's potential for both positive and negative impact on the world. Organizations must prioritize governance frameworks and the ethical use of AI when building solutions, and developers must treat trust as a driving factor from the outset. With a resolution like this, leaders can anticipate new certifications and regulations that could affect project timelines.
“I do think there should be some regulations on AI.”
🌇 Google lawsuit means trouble on the horizon
France’s competition authority has imposed a heavy €250 million ($270M) fine on Google for failing to comply with its commitments regarding fair negotiations with news agencies over the use of their content to train its models.
In the weeds:
A key issue was Google using copyrighted news content to train its AI model Gemini/Bard without properly notifying or compensating publishers, which the authority views as a potential copyright violation.
The decision highlights emerging regulatory risks around training data and intellectual property rights for AI systems that companies must address.
The impact: This is not the first lawsuit of this nature; The New York Times is in a similar lawsuit with OpenAI. It underscores the need for robust governance frameworks and transparency when leveraging publicly available content for AI solutions. As AI capabilities become more strategically important, responsibly developed and legally compliant AI will likely become a competitive differentiator.
🗣️ GenAI demand growing in enterprises
Enterprise adoption of generative AI accelerated rapidly in 2023. Data shows demand surged from 7% of contracts in the year's first half to 38% in the second half. The industries with the highest demand were tech, healthcare, and finance.
In the weeds:
Contracts involving generative AI surged from 7% in the first half of 2023 to 38% in the second half, according to Omdia's Enterprise AI Contracts Database.
Year-over-year, the number of contracts specifying generative AI increased from just 1 in 2022 to 96 (23% of contracts) in 2023, indicating rapid enterprise adoption.
The top industries adopting generative AI were IT (over 50% of contracts), healthcare, financial services, business services, retail, and media/entertainment.
Key use cases in the contracts included virtual assistants, intelligent process automation, automated report generation, and personalized marketing.
This adoption trend signals generative AI's growing strategic importance and risks if use cases are not properly scoped to align with business needs.
The impact: The demand for generative AI is expected to keep increasing, and it could become a significant competitive advantage. Early adopters who invest in production-ready, scalable solutions are likely to enjoy greater benefits. To unlock the full potential of generative AI while mitigating risks, companies should adopt an enterprise-wide strategy. Organizations should also be mindful of the growing need for AI talent, which can be difficult to find.
🔥 Hot AI Tools
Turbocharge Governance
TrustVector - Advancing trust between humans and machines
Alation - Data Governance for faster, more informed decisions
Atlan - The AI-powered governance platform
Fiddler.ai - AI Observability Platform