Contents of the OpenAI Official Website
Introduction
While scrolling Twitter, I came across a tweet:
"Can I say that an AI product manager who hasn't read through everything on the OpenAI official website is not qualified?"
So I figured I should take a careful look myself.
Contents
Tagline
Creating safe AGI that benefits all of humanity
Latest updates
They maintain a pace of two updates every five days, which is frankly terrifying.
Democratic inputs to AI grant program: lessons learned and implementation plans
Updated 2024-01-16:
We then awarded $100,000 to 10 teams out of nearly 1000 applicants to design, build, and test ideas that use democratic methods to decide the rules that govern AI systems.
We received nearly 1,000 applications across 113 countries. There were far more than 10 qualified teams, but a joint committee of OpenAI employees and external experts in democratic governance selected the final 10 teams to span a set of diverse backgrounds and approaches: the chosen teams have members from 12 different countries and their expertise spans various fields, including law, journalism, peace-building, machine learning, and social science research.
The projects spanned different aspects of participatory engagement, such as novel video deliberation interfaces, platforms for crowdsourced audits of AI models, mathematical formulations of representation guarantees, and approaches to map beliefs to dimensions that can be used to fine-tune model behavior.
How OpenAI is approaching 2024 worldwide elections
Updated 2024-01-15:
Our tools empower people to improve their daily lives and solve complex problems—from using AI to enhance state services to simplifying medical forms for patients.
Preventing abuse
We expect and aim for people to use our tools safely and responsibly, and elections are no different. We work to anticipate and prevent relevant abuse—such as misleading “deepfakes”, scaled influence operations, or chatbots impersonating candidates. Prior to releasing new systems, we red team them, engage users and external partners for feedback, and build safety mitigations to reduce the potential for harm. For years, we’ve been iterating on tools to improve factual accuracy, reduce bias, and decline certain requests. These tools provide a strong foundation for our work around election integrity. For instance, DALL·E has guardrails to decline requests that ask for image generation of real people, including candidates.
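The "decline certain requests" idea is concrete enough to sketch. Below is a minimal client-side pre-check using OpenAI's Moderation endpoint; it assumes the openai Python SDK (v1+) with OPENAI_API_KEY set, and refusing on a flag is my own choice here. It is separate from, and in addition to, the server-side guardrails the passage describes.

```python
# A minimal sketch of a moderation pre-check before a request reaches a model.
# Assumptions: openai Python SDK v1+, OPENAI_API_KEY set in the environment;
# hard-refusing on a flag is my choice, not OpenAI's documented policy.
from openai import OpenAI

client = OpenAI()

def precheck(prompt: str) -> bool:
    """Return True if the prompt passes the moderation pre-check."""
    return not client.moderations.create(input=prompt).results[0].flagged

if not precheck("Generate a campaign image of a real candidate"):
    print("Request declined before reaching the model.")
```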
Transparency around AI-generated content
Better transparency around image provenance—including the ability to detect which tools were used to produce an image—can empower voters to assess an image with trust and confidence in how it was made. We’re working on several provenance efforts. Early this year, we will implement the Coalition for Content Provenance and Authenticity’s digital credentials—an approach that encodes details about the content’s provenance using cryptography—for images generated by DALL·E 3.
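To make "encodes details about the content's provenance using cryptography" concrete, here is a toy sketch of the underlying idea: bind a signed manifest to the image bytes so that any later edit breaks verification. This is not the C2PA format; the HMAC key, manifest fields, and JSON layout are all illustrative assumptions (real credentials use signed manifests embedded in the image file itself).

```python
# Toy illustration of cryptographic provenance: a manifest describing who made
# the content is bound to the content bytes by a hash, then signed. NOT the
# C2PA format -- the key, fields, and layout here are illustrative only.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-credential"  # assumption: stand-in for a real signing key

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    manifest = {
        "generator": generator,  # e.g. "DALL-E 3"
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    # Any edit to the image changes its hash and invalidates the credential.
    if claimed["content_sha256"] != hashlib.sha256(image_bytes).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```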
We are also experimenting with a provenance classifier, a new tool for detecting images generated by DALL·E. Our internal testing has shown promising early results, even where images have been subject to common types of modifications. We plan to soon make it available to our first group of testers—including journalists, platforms, and researchers—for feedback.
Improving access to authoritative voting information
In the United States, we are working with the National Association of Secretaries of State (NASS), the nation’s oldest nonpartisan professional organization for public officials. ChatGPT will direct users to CanIVote.org, the authoritative website on US voting information, when asked certain procedural election related questions—for example, where to vote. Lessons from this work will inform our approach in other countries and regions.
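The described behavior is essentially intent routing, and a toy version is easy to picture. The sketch below is my own illustration, not OpenAI's implementation: the keyword list and the matching rule are assumptions, and in ChatGPT the routing presumably lives in the model and policy layer rather than in string matching.

```python
# A toy illustration of routing procedural election questions to an
# authoritative source -- not OpenAI's implementation. Keywords and the
# substring-matching rule are assumptions for illustration only.
PROCEDURAL_ELECTION_KEYWORDS = (
    "where do i vote", "polling place", "register to vote", "voting deadline",
)

def election_redirect(user_message: str) -> str | None:
    """Return a pointer to authoritative US voting info for procedural questions."""
    text = user_message.lower()
    if any(kw in text for kw in PROCEDURAL_ELECTION_KEYWORDS):
        return ("For authoritative US voting information, please see "
                "https://www.CanIVote.org")
    return None  # otherwise fall through to the normal model response
```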
We’ll have more to share in the coming months. We look forward to continuing to work with and learn from partners to anticipate and prevent potential abuse of our tools in the lead up to this year’s global elections.
Safety & responsibility
Our work to create safe and beneficial AI requires a deep understanding of the potential risks and benefits, as well as careful consideration of the impact.
Research
We research generative models and how to align them with human values
Products
Our API platform offers our latest models and guides for safety best practices.
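For reference, a minimal call against the API platform looks like this (openai Python SDK v1+, OPENAI_API_KEY set; the model name and message content are placeholders):

```python
# Minimal chat completion via the openai Python SDK (v1+).
# Assumes OPENAI_API_KEY is set; model and messages are placeholders.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize OpenAI's safety approach."}],
)
print(response.choices[0].message.content)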
DALL·E 3
DALL·E 3 understands significantly more nuance and detail than our previous systems, allowing you to easily translate your ideas into exceptionally accurate images.
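And a minimal DALL·E 3 call through the same SDK, for comparison (the prompt and size are placeholder values):

```python
# Minimal DALL-E 3 image generation via the openai Python SDK (v1+).
# Assumes OPENAI_API_KEY is set; prompt and size are placeholders.
from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="dall-e-3",
    prompt="a watercolor map of a small coastal town at dawn",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # hosted URL of the generated image
```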
Careers at OpenAI
Developing safe and beneficial AI requires people from a wide range of disciplines and backgrounds.