Afua Bruce emphasizes that shaping AI is a collective responsibility, not one exclusive to tech experts, because AI can have both positive and negative impacts on society.

Artificial intelligence has been the subject of much discussion lately, with terms like “superpower,” “catastrophic,” “revolutionary,” “irresponsible,” “efficiency-creating,” and “dangerous” being used to describe it. The recent release of ChatGPT to the public has brought AI into the spotlight, leading many to question how it differs from other technologies and what the future will look like when our business and personal lives are completely transformed.

First and foremost, it’s important to recognize that AI is simply a tool created by humans. In our book, The Tech That Comes Next, Amy Sample Ward and I emphasize that technology is shaped by human beliefs and limitations. Despite the portrayal of AI as a completely self-sufficient and self-teaching technology, it is actually governed by the rules programmed into its design. For example, when I asked ChatGPT about the best country for jollof rice, it responded by acknowledging its role as an AI language model and explained that the question is subjective and depends on personal preferences.

This response reflects a deliberate design choice made by the AI programmers to avoid providing specific answers based on cultural opinion. While users can ask ChatGPT about more controversial topics, they will still receive similar responses due to this design choice. In recent months, ChatGPT has also been modified to address accusations of sexism and racism in its responses. We should hold developers to high standards and demand transparency and inclusivity in the process of defining boundaries for AI tools.

While designers have significant power in shaping AI tools, industry leaders, government agencies, and nonprofit organizations also have the ability to decide when and how to implement AI systems. Generative AI may impress us with its ability to create content and perform various tasks, but it's important to understand that AI is not a one-size-fits-all solution. Rather than succumbing to technological hype, those responsible for using AI should first consult the communities affected by its application and ask about their needs and aspirations. This input should guide developers in setting constraints and making informed decisions about whether and how to use AI.

An example that highlights this approach is the mental health app Koko, which tested GPT-3 to counsel individuals but discontinued the test due to a lack of human connection. The affected community made it clear that they preferred trained human therapists. Despite the widespread conversation about AI, its use is not mandatory. Relying solely on AI systems for medical services, housing prioritization, or recruitment and hiring can have disastrous consequences, leading to exclusion and harm on a large scale. It is crucial to recognize that choosing not to use AI can be just as powerful as deciding to use it.

Underlying all these considerations are fundamental questions about the quality and accessibility of the datasets that power AI. AI works by processing existing data to make predictions or generate new content. If the data is biased, unrepresentative, or lacks diversity, then the outputs of AI systems can perpetuate these biases. To combat this, researchers and advocates working at the intersection of technology, society, race, and gender should inform responsible AI development. For example, Safiya Noble’s research led to Google updating its search results to address biased outcomes.

Efforts are also underway to involve communities in shaping AI systems before they are deployed. Researchers from Carnegie Mellon University and the University of Pittsburgh used AI lifecycle comic boarding, which translates AI reports and tools into easily understandable descriptions and images, to engage frontline workers and unhoused individuals in discussions about an AI-based decision support system for homeless services. This approach allowed participants to understand the system and give its developers valuable feedback. It highlights the importance of combining technology with societal context to shape AI effectively.

Moving forward, society as a whole has a role to play in balancing the design and use of AI systems and mitigating the potential harms they can cause. Technologists, organizational leaders, policymakers, funders, investors, and communities all have responsibilities in this process. Technologists and organizational leaders must consider ethical design and deployment, policymakers should establish guidelines to minimize harm, funders and investors should prioritize human-centric AI systems and community engagement, and communities should provide input and analysis. By adopting a cross-sector, interdisciplinary approach, we can create more equitable AI systems.

There are already inspiring examples of AI being used for the benefit of society in equitable ways. Farmers in India, Ethiopia, and Kenya can access agricultural knowledge in their local languages through the use of Gooey.AI on WhatsApp. The African Center for Economic Transformation is developing a program to conduct AI sandbox experiments in economic policymaking across multiple countries. Researchers are also exploring how AI can revitalize Indigenous languages, such as the Cheyenne language in the western United States.

These examples demonstrate the potential for AI to be utilized in ways that prioritize equity. History has shown that the negative effects of technology can compound over time, and it is not the sole responsibility of the tech community to address these issues. It is our collective responsibility to improve the quality and use of AI systems in relation to our lives and communities.
