The New York Times Decides to Apply AI to Its Products and Editorial Team
AI Applied to Editing and Product Teams
On February 17, The New York Times announced that it will begin applying AI technology across its product and editorial teams. Internal tools may eventually be used to write social media copy, SEO headlines, and some code. According to a report by Semafor, an internal email announced that AI training would be offered to editorial staff and that a new tool called “Echo” would be launched.
AI Training and Tool Guidelines
The company also shared editorial guidelines for using AI and made several AI products available to staff for developing website features and creative content. “Generative AI can help our journalists dig deeper into the truth and help more people understand the world. Machine learning has already helped us report stories we could not otherwise cover, and generative AI is expected to further strengthen our journalistic capabilities,” the editorial guidelines state.
Approval and Use of AI Tools
The New York Times has approved a series of AI tools for use by its editorial and product teams, including GitHub Copilot for coding, Google’s Vertex AI, NotebookLM, the Times’ own ChatExplorer, some Amazon AI products, and OpenAI’s non-ChatGPT API through the Times’ business account (with legal department approval). The company also announced the launch of “Echo,” an internal beta tool that concisely summarizes Times articles, briefings, and interactive content.
Encouraging Editorial Staff to Use AI Tools
The newspaper encourages editorial staff to use these AI tools to generate SEO headlines, summaries, and audience-engagement copy; to suggest edits; to brainstorm questions and ideas; to query reporters’ own documents; to assist with research; and to analyze the Times’ own documents and images. In a series of training documents, the editorial guidelines list possible use cases for journalists, such as:
- "How many times does AI appear in this New York Times report?"
- "Can you make this paragraph more concise?"
- "If you were to post this article on Facebook, how would you promote it?"
- "Write a short summary of this article in clear, conversational language for a news brief."
- "Can you suggest five SEO-optimized headlines for this article?"
- "Can you summarize this Shakespeare play briefly?"
- "Can you summarize this government report in plain language?"
Limitations on AI Use
However, the company has placed limits on AI use, citing the risk of copyright violations and source exposure. Editorial staff were told not to use AI to draft or substantially revise articles, not to input third-party copyrighted material or confidential source information, not to use AI to bypass paywalls, and not to publish machine-generated images or video except when demonstrating the technology, and then only with clear labeling. The company warned that improper use of unauthorized AI tools could cost The New York Times its legal right to protect sources and notes.
Legal Battles and Technological Innovation
The New York Times is still locked in a legal battle with OpenAI, alleging that the company used Times content without permission to train its models, which the paper says amounts to large-scale copyright infringement. Microsoft, OpenAI’s largest investor and a co-defendant in the suit, has argued that the Times’ lawsuit is an attempt to stifle technological innovation.
Conclusion
The New York Times is introducing AI tools to improve the efficiency and quality of its news editing and product development. Despite legal challenges and usage restrictions, the adoption of AI is expected to open new opportunities for innovation. As the technology continues to evolve, The New York Times hopes to strengthen its competitiveness and influence in news reporting and content creation.