Technology & Science
March 14, 2024

OpenAI should be copying journalists’ principles, not just their content

Replicating journalists’ content is not enough: OpenAI and its peers should also adhere to journalistic principles such as accuracy, fairness, transparency, and accountability. Integrating those principles into how language models are developed and deployed is essential for maintaining trustworthiness and ethical standards, mitigating the risks of misinformation, and ensuring that AI-generated content upholds the integrity and values of journalism.

Lawsuits by The New York Times and other media outlets against OpenAI over the unauthorized use of copyrighted material to train AI models are the latest skirmish in the years-long struggle between news publishers and Big Tech. Tech companies have seen massive success innovating and scaling digital products, often on the back of content “borrowed” from publishers. Platforms such as Google, Facebook, and X use news to acquire and engage users while tracking every detail of those users’ behaviors and interests. The scale and fidelity of this consumer data are a goldmine, especially when deployed to target the right advertising to the right person at the right moment. Publishers simply haven’t been able to compete in the contest for ad dollars.

Meanwhile, the flood of misinformation and disinformation across the Internet, and tech platforms’ algorithmic promotion of emotionally charged content that drives clicks, have undermined consumer trust in media. The result has been disastrous for the news industry.

Now, Big Tech is facing a trust crisis of its own. While the potential benefits of AI are enormous, 86% of consumers think companies should come together to set clear, uniform standards and practices for their use of AI. And according to recent research by the Pew Research Center, the consumers most concerned about how their personal information was being used were the likeliest to have stopped using a digital device, website, or app altogether. We can expect this trend to continue with the explosion of AI-generated content and its integration into all sorts of daily activities. Overwhelmingly, consumers say they want data privacy, choice and control over how their data is used, and accountability from companies for the responsible use of their data.

To maximize the value that AI innovation can bring to business and society while minimizing harm to individuals and shared values, tech companies are going to have to address these serious concerns. And perhaps they can take a page out of their own book: borrow from news media.

Media companies like the BBC, where I work, have developed broad-minded frameworks to identify the roots of the crisis of confidence in journalism and to address audience concerns. BBC News’ Verify unit was established after research revealed five things consumers expect from news organizations: fairness, transparency, respect, clarity, and courage. We apply these principles throughout our news creation process. Big Tech can apply them as well.

In news, fairness is about balance. Our journalists research all aspects of an issue and present a range of perspectives, with due weight given to the various sides and layers of nuance. Do we get this right 100% of the time? No, but we are transparent and issue corrections when errors occur. We respect our audience’s intelligence by not talking down to them and we respect their time by eschewing clickbait and other tricks. We explain complicated topics and cut through chaos to clarify key issues and context.

How Big Tech can adopt these principles

On fairness, tech companies can ensure AI models are trained on sources that are balanced and don’t skew to the perspective of a privileged group or perpetuate societal biases. This is critical so that we do not repeat the mistakes of the past. The historical examples are plentiful: A 2016 ProPublica investigation found that COMPAS, software widely used by courts to predict the likelihood that a defendant would reoffend and to inform sentencing, misclassified Black individuals as future criminals at twice the rate of their white counterparts; Amazon had to stop using hiring software that demonstrated gender bias; and IBM, Amazon, and Microsoft sold facial recognition software to police departments that was less accurate for non-white individuals.

They can also be more transparent with the public about how they are using personal information and other data to drive their models or influence outcomes. They can communicate with clarity–in simple language and at a readable font size–what user data they collect, how they use it, who they will share it with, and how long they plan to store it.

An overwhelming majority of Internet users (90%) believe we should have choice and control over our data. We want companies to respect us as human beings. We are people, not passive data farms. That’s why organizations dedicated to tech for good, like the Ada Lovelace Institute and the Ethical Tech Project, on whose board I serve, recommend “agency and autonomy” as a guiding principle for tech companies implementing better data privacy practices.

People expect institutions they trust to do the right thing. That takes courage. For journalists, that means holding power to account, asking difficult questions, and going to dangerous places to bring back the news and bear witness to history. For AI firms, this principle might apply to more existential questions: balancing automation with human insight, being open and honest about advances toward Artificial General Intelligence (AGI), and making nontraditional leadership decisions–ensuring there is interdisciplinary input in the C-suite and on boards.

Clearly, both publishing and Big Tech are at an inflection point–with each other and with the society they serve. But ethical data practices are not just about doing good; they are about doing good business. We should all be working to protect what matters most: business interests, yes, but also individual agency, shared values, and institutions that foster a stable, open, fair, and democratic society.

Jennie Baird is the Chief Product Officer of BBC Studios and previously led the company’s Global Digital News and Streaming business. She is a board member of The Ethical Tech Project.

Sourced from Fortune