The tragic loss of Alison Parker, a television reporter for WDBJ, led her father, Andy Parker, to dedicate himself to fighting gun violence and social media's role in amplifying it. Recently, OpenAI CEO Sam Altman testified before Congress about the risks posed by his company's technology, including disinformation campaigns and manipulation driven by programs like ChatGPT. The concern is not new; many have called for regulation before, but Altman has now joined the conversation. Altman believes that AI can "cause significant harm to the world," a sentiment with which Andy Parker agrees.
However, before regulating AI, we must address the regulation of the Internet itself. The harms now attributed to AI are already playing out online. Videos of Alison's murder have circulated for almost eight years, generating clickbait revenue for Google and Facebook. Republicans and Democrats discussing regulation should consider this issue without deferring to individuals like Sam Altman, whose technology has become part of the problem.
Altman's technology is already in use across social media platforms. I asked ChatGPT whether social media algorithms are a form of AI, and it confirmed that they are. Platforms such as Facebook, Twitter, Instagram, and YouTube use AI algorithms to personalize and curate content for users based on their preferences and behavior.
The harm caused by AI is undeniable, and it manifests in two critical ways. First, AI algorithms amplify both positive and negative content, and in practice the balance tilts toward sensationalism and engagement rather than responsible information. The prioritization of clickbait, divisive content, and misinformation raises concerns about its impact on public discourse, social cohesion, and democratic processes. Despite their claims to the contrary, Google and Facebook continue to profit from Alison's murder video through their AI-powered platforms.
Second, transparency and accountability are sorely lacking when it comes to AI algorithms in social media. Users are rarely told how the algorithms that curate their feeds work, making it difficult to recognize the biases and potential manipulation at play. AI and social media are not separate issues; they are deeply intertwined and shape each other.
With the Supreme Court declining, in its recent ruling in Gonzalez v. Google, to hold social media companies liable for the content they promote, the responsibility falls on Congress. This is an issue with the potential to unite both sides of the political spectrum. It is time for lawmakers to act so that individuals harmed by social media can seek recourse. We cannot afford to wait for AI to do further damage; it has already caused plenty.
In conclusion, collaboration among governments, organizations, and researchers is crucial to harnessing the benefits of AI while minimizing its risks. Responsible development, transparency, and attention to societal impact should guide the frameworks, regulations, and ethical guidelines that govern AI. Congress must prioritize this issue and fulfill its duty to protect citizens from the harmful consequences of unregulated online content. Andy Parker, an advocate for gun safety, urges lawmakers to act, drawing on his experience as the father of Alison Parker, a young journalist tragically killed.