Here is some Artificial Intelligence (AI) news from April 2024 that we found interesting across the web.
Proposed Legislation Would Require AI Firms to Disclose Use of Copyrighted Artwork
Key Takeaway
The Generative AI Copyright Disclosure Act proposed by Congressman Adam Schiff aims to hold AI companies accountable for using copyrighted content by requiring them to disclose the copyrighted materials in their AI training datasets, amid increasing legal scrutiny and copyright infringement litigation.
Summary
– Adam Schiff, a Democratic congressman from California, has introduced a bill in response to the growing legal debates surrounding the usage of copyrighted works by major AI companies in their generative AI models.
– The proposed legislation, named the Generative AI Copyright Disclosure Act, mandates AI companies to report any copyrighted materials used in their AI training datasets to the Register of Copyrights prior to launching new AI systems.
– Companies would need to comply with this disclosure requirement at least 30 days before making their AI tools public, facing financial penalties for non-compliance.
– The bill aims to balance the transformative potential of AI with the need for ethical guidelines and protection of intellectual property, without outright banning the use of copyrighted material for AI training.
– This legislative effort has garnered support from various notable entertainment industry organizations and unions, including the Recording Industry Association of America and the Screen Actors Guild-American Federation of Television and Radio Artists, highlighting the concern for protecting human creative content.
– High-profile AI firms like OpenAI, the creators of ChatGPT, are currently facing copyright infringement lawsuits from entities such as comedian Sarah Silverman and the New York Times, raising significant questions about the legality of using copyrighted material under the fair use doctrine.
– The controversy stems from claims by AI companies that their usage of copyrighted content is legal under fair use, a stance that is under legal scrutiny and could have major implications for both artists’ earnings and the operational feasibility of AI technologies.
– Additionally, there is an ongoing discourse within the creative industries about the impact of generative AI on artists’ rights, exemplified by a recent open letter from over 200 musicians demanding greater protections against AI technologies that could potentially displace human creativity.
Texas Turns to AI to Replace Thousands of Human Test Scorers
Key Takeaway
Texas is implementing an AI-powered “automated scoring engine” to grade student exams, aiming to improve efficiency and reduce costs, although concerns about its accuracy and fairness persist.
Summary
– Texas Education Agency (TEA) is rolling out an artificial intelligence-powered system to grade state-mandated STAAR exams, potentially saving $15-$20 million annually.
– The new system, named an “automated scoring engine,” utilizes natural language processing technology similar to that of chatbots like OpenAI’s ChatGPT for grading open-ended questions.
– The STAAR exams have recently been redesigned to contain more open-ended questions, increasing the need for efficient scoring mechanisms.
– In 2024, the TEA plans to reduce the number of human graders from 6,000 to under 2,000, relying more on this AI system.
– The automated system was trained on 3,000 exam responses that had already been graded by humans twice.
– To ensure accuracy, a quarter of the AI-graded exams will be reviewed by human graders, along with any answers that the AI system finds confusing or cannot understand.
– Some educators express concern over the reliability of the AI system, citing instances where it significantly increased the number of zero scores on constructed responses.
– Although AI essay-scoring engines have been used in at least 21 states, TEA’s approach seeks to distinguish itself by emphasizing the system’s closed, non-progressively learning nature to avoid common pitfalls associated with AI.
– The initiative has sparked debate among educators and students, particularly regarding the ethical implications of leveraging AI in educational assessments.
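The TEA has not published the actual logic it uses to route exams for review, but the policy described above (a quarter of AI-graded exams re-checked by humans, plus any response the engine flags as confusing) can be illustrated with a minimal Python sketch. The function name, the 25% sample rate, the confidence threshold, and the `confidence` field are all assumptions for illustration, not details from the TEA system:

```python
import random

def select_for_human_review(responses, sample_rate=0.25,
                            confidence_threshold=0.6, seed=0):
    """Return the ids of AI-graded responses routed to human graders:
    a fixed random sample plus anything the engine scored with low
    confidence. Illustrative only; TEA's real routing logic is unpublished."""
    rng = random.Random(seed)  # seeded so the sample is reproducible
    review = []
    for r in responses:
        # Low-confidence answers are always reviewed; the rest are
        # sampled at roughly sample_rate.
        if r["confidence"] < confidence_threshold or rng.random() < sample_rate:
            review.append(r["id"])
    return review
```

Under a policy like this, every answer the engine is unsure about reaches a human, while the random sample provides an ongoing accuracy check on answers the engine was confident about.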
Phony AI-Generated Law Firms Send Bogus DMCA Notices to Manufacture SEO Backlinks
Key Takeaway
Fake law firms are utilizing generative AI to create false DMCA infringement notices for SEO manipulation.
Summary
– A journalist received a DMCA Copyright Infringement Notice from “Commonwealth Legal,” claiming representation of the “Intellectual Property division” of Tech4Gods over a keyfob photo from Unsplash.
– The notice demanded that a credit link be added for their client, and stated that simply removing the image would not resolve the issue.
– Upon investigation, several red flags were identified with Commonwealth Legal’s request:
  – The firm, claiming to be based in Arizona (a state, not a commonwealth), likely does not exist.
  – The firm’s website was registered only recently, its IP address resolved to Canada, and the building at its listed physical address does not have the “fourth floor” given in the notice.
  – The website used stock images and AI-generated faces for attorney profiles.
  – The attorney bios were superficial, with bizarre qualifications and backgrounds.
– The primary motive behind these fake DMCA notices appears to be obtaining backlinks for SEO benefits.
– The owner of Tech4Gods denied any involvement, suggesting a possible act by a disgruntled former contractor.
– Ernie Smith, the journalist who received the DMCA notice, had not heard back from Commonwealth Legal after the specified five business days.
– The case exemplifies the misuse of AI and legal threats for SEO manipulation, raising concerns over the evaluation of backlink quality by search engines.
Meta’s Head of AI: Large Language Models Are Incapable of Achieving Human-Level Intelligence
Key Takeaway
Yann LeCun, Meta’s AI chief and a Turing Award winner, argues that Large Language Models (LLMs) will not achieve human-level intelligence due to their lack of cognitive capabilities like reasoning and understanding the physical world. He advocates for “objective-driven AI” as a more promising path towards advanced intelligence.
Summary
– Yann LeCun, a prominent figure in AI research and Meta’s Chief AI Scientist, has expressed skepticism about the current trajectory of AI development, specifically doubting that LLMs can reach human-level intelligence.
– He disagrees with other tech leaders who predict the imminent arrival of Artificial General Intelligence (AGI), challenging the notion that human intelligence is general and advocating instead for developing “human-level AI.”
– During an event in London, LeCun identified four cognitive challenges that AI must overcome to approach human intelligence: reasoning, planning, persistent memory, and understanding of the physical world.
– LeCun criticizes LLMs for their reliance on textual information, arguing that their apparent fluency in language masks a superficial understanding of reality. He highlights the vast difference between the data processed by LLMs and the richer, more varied experiences humans gain through interaction with the world.
– He argues that most human knowledge is non-linguistic and that LLMs’ architecture fundamentally limits their potential to achieve human-like intelligence.
– LeCun proposes an alternative approach he terms “objective-driven AI,” where AI systems learn about the world through sensory data and video, building a “world model” that helps them understand the consequences of actions and predict outcomes.
– He is optimistic about the future of objective-driven AI, suggesting that it could eventually surpass human intelligence, although not as soon as some have predicted.
– LeCun’s stance highlights a significant debate within the AI research community about the best path forward to achieving more advanced forms of artificial intelligence.