Italian data authority to evaluate AI platforms for privacy and legal compliance

Garante, Italy’s data protection authority, is set to assess a range of AI platforms and to bring in AI specialists for the task. Agostino Ghiglia, a member of Garante’s board, stated that the objective is to evaluate whether these tools adequately address issues concerning data protection and compliance with privacy laws. Should it be deemed necessary, Garante will launch further investigations based on the outcomes of this evaluation.

Garante’s current initiative to evaluate AI aligns with its ongoing scrutiny of the technology. This focus intensified following the temporary ban of ChatGPT in March over concerns about privacy and the risk of data breaches. The ban was eventually lifted after OpenAI fulfilled the requirements outlined by the Italian watchdog. OpenAI’s compliance involved various measures, such as providing comprehensive information on data collection and usage, introducing a new mechanism for EU users to object to their data being used for training, and implementing an age verification tool for users.

G7 leaders call for trustworthy AI: Focus on standards and democratic values

G7 leaders have called for the development and adoption of technical standards to ensure the trustworthiness of AI. While acknowledging that approaches to achieving trustworthy AI may differ, the leaders stressed that regulations for digital technologies, including AI, should align with shared democratic values. They also expressed concern that the governance of AI has not kept pace with its rapid advancement. According to the outline of their working lunch, the leaders agreed that ministers would discuss the technology under the banner of the ‘Hiroshima AI Process’ and deliver results by the end of the year.

In recent weeks, global attention has turned to the regulation of AI. In the EU, the AI Act received approval from the European Parliament’s Civil Liberties and Internal Market committees. This significant step means that the proposal will now progress to plenary adoption in June, before entering the final stage of the legislative process: negotiations with the EU Council and the Commission.

US regulators are approaching AI regulation with greater caution, and the debate remains heated, with no definitive steps taken so far. Last week, global media attention focused on Sam Altman’s testimony before the US Congress, where the CEO of OpenAI expressed concerns about AI and called for regulatory measures. Altman outlined his plan for regulating AI, proposing the formation of a new government agency responsible for licensing large AI models, with the authority to revoke licences from companies that fail to meet government standards. He emphasised the importance of establishing safety standards for AI models, including evaluating their dangerous capabilities.

In the Far East, China has adopted a more limited approach to AI regulation, releasing draft laws aligning with its socialist ideals.

New York City public schools lift restrictions on ChatGPT use

The New York City school system recently lifted its restriction on the use of ChatGPT in public schools. Access had previously been restricted due to concerns about potential misuse. A similar prohibition applies to websites such as YouTube, Netflix, and Roblox, which requires schools to request access for their staff and students.

Chancellor David Banks stated in an opinion piece that the decision to lift the restriction was made after consulting experts. The school system plans to provide resources and support to help educators and students learn about and explore AI technology, including examples of successful implementation and a toolkit for classroom discussions. The schools will also gather input from experts to assist in using AI tools effectively.

The chancellor added that generative artificial intelligence has the potential to cause significant shifts in society, and that it is essential to ensure the benefits of this technology are distributed fairly to prevent widening socioeconomic gaps. It is also crucial to educate students about AI’s ethical concerns and prepare them for AI-related job opportunities.

Italy invests in protecting workers from the threat of AI substitution

Italy has allocated €30 million to upskill unemployed people and workers whose jobs are most at risk from advances in automation and artificial intelligence.

According to a Reuters report, the funding will be split in two: €10 million will go towards qualifications for people whose jobs are at high risk of being replaced by automation and technological innovation, while the remaining €20 million will support the development of digital skills that improve the chances of unemployed and inactive people entering the labour market.

The proliferation of AI and automation, and the rapid development of tools like ChatGPT, have captured the attention of legislators and regulators in several countries. Many experts argue that new rules are needed to govern AI because of its potential impact on national security, education, and jobs.

UK trade union raises concerns over the ethical use of AI in the workplace

The Trades Union Congress (TUC) in the UK shared its opinion on the new UK initiative to establish a single AI watchdog institution to oversee future AI development. The TUC argues that the new UK law will dilute the rights of human workers and has called for stronger protections against AI that makes decisions about workers’ lives and employment. In particular, AI is used in facial recognition technology (to analyse expressions, accents, and tone of voice), in devices that record data about worker activity for later analysis, and in recruiting systems that carry an inherent bias towards certain groups of workers. In the TUC’s view, the UK government has not done enough to ensure AI is used ethically in the workplace. It also urged the inclusion of workers in discussions around AI regulation.

In its response, the UK government claimed that safeguards will remain in place. It also argued that AI development will bring more jobs to the market, which will help the economy grow and, consequently, improve working conditions.

Is Google’s reign over? Samsung considers replacing Google with Microsoft’s Bing

Samsung is considering replacing Google with Microsoft’s Bing as the default search engine on its devices, the New York Times reported. The report caused Google’s shares to drop by 4% and raised concerns at the company, as an estimated $3 billion in annual revenue is at risk. The potential loss of a $20 billion Apple contract, up for renewal this year, has added to the company’s worries. These developments could have a significant impact on the global search engine market and its users. Google is reportedly in ‘panic’ mode, with executives and engineers scrambling to make radical changes to the search engine to keep pace with rivals that have already integrated AI technology into their products.

To combat this threat, Google is working on an ‘all-new’ version of Google Search that incorporates predictive, conversational, and revenue-generating features, along with other AI initiatives under the project name ‘Magi.’ However, Google has a track record of keeping its AI technology under wraps in its research vault, so it remains to be seen whether these projects will see the light of day.

What implications does this have for Google and users worldwide?

Given the advancements made by its competitors in AI, Google’s dominance in the market could be threatened. However, for users, this could translate into a more level playing field among search engines, resulting in increased competition and a broader range of options.