Google suggests AI regulations
Tech behemoth Google has submitted recommendations to the Australian government on how to regulate AI.
The company says copyright laws should be softened to facilitate AI training, calls for flexibility in international data sharing, and argues that tech giants should not bear undue blame for AI misuse.
Responding to an inquiry into establishing responsible AI guidelines in Australia, Google's submission warned that the country risks losing talent and opportunities if it fails to adopt copyright laws conducive to AI advancement.
The company called for balanced legislation that supports the training of AI and avoids imposing excessive requirements to explain AI decision-making processes.
Science and Industry Minister Ed Husic called for industry submissions in June, seeking to strike a delicate balance between guarding against AI-related risks and promoting local innovation.
Google's submission recommends a legal framework for AI innovation that does not hinder expansive AI development initiatives.
Some key suggestions include:
- Competition safe harbours to enable cross-industry collaboration on AI safety research
- Proportional privacy laws to facilitate secure data flows across national borders
- A clear legal framework for training AI models on internet data
- Liability clarity to define responsibility for misuse of AI systems
- A copyright system overhaul enabling fair use of copyright-protected content
Google drew a comparison with innovation-friendly jurisdictions such as the United States, highlighting the need for Australia to evolve its copyright laws so they do not hinder AI research and development.
The submission also underscored the risk of losing innovation talent and international investment if legal uncertainties persist.
The tech giant addressed concerns about accountability for AI misuse, asserting that organisations should not bear the responsibility for others' misuse of their AI technology.
The company recommended that the government provide procedural guidance on risk assessment while allocating responsibility for validation to the deploying organisation.
While Google acknowledged the importance of transparent AI decisions, it cautioned against excessive requirements, arguing that fully explaining every AI outcome could stifle AI's potential and undermine its societal and economic benefits.