OpenAI has announced the integration of its advanced image-generating model, DALL·E 3, into ChatGPT Plus and Enterprise. This move aims to offer users a more interactive and visually engaging experience by generating high-quality images based on textual prompts. The integration is designed to benefit a wide range of applications, from content creation to business presentations.
The company has implemented a multi-layered safety system to restrict DALL·E 3 from producing potentially harmful content, such as violent or adult imagery. This safety system scrutinizes both user prompts and the generated images before they are shown to users. OpenAI has also collaborated with early users and expert evaluators to identify and rectify shortcomings in its safety mechanisms. This process has helped the company recognize edge cases where the model might generate inappropriate content, such as sexually explicit images, and curb the model's ability to produce misleading visuals.
According to OpenAI’s blog post, the company has taken measures, in preparation for the wide deployment of DALL·E 3, to reduce the likelihood of the model generating content that mimics the style of living artists or portrays public figures. It has also worked on improving demographic representation in the images the model generates. For more details on the preparations for DALL·E 3’s deployment, OpenAI refers users to the DALL·E 3 system card.
OpenAI emphasizes the importance of user feedback for continuous improvement. ChatGPT users can report unsafe or inaccurate outputs by using a flag icon, which notifies the company’s research team. OpenAI believes that listening to a broad and diverse user base is crucial for responsible AI development and deployment.
The company is also in the early stages of evaluating a new internal tool known as a “provenance classifier.” This tool is designed to identify whether an image has been generated by DALL·E 3. Preliminary internal tests show that the classifier is over 99% accurate in identifying unmodified images generated by DALL·E 3, and retains over 95% accuracy even when the image has undergone common modifications like cropping or resizing. While the classifier is promising, it is not yet definitive in its conclusions. OpenAI expects this tool to be part of a broader set of techniques aimed at helping people understand if audio or visual content is AI-generated.
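To make the reported figures concrete: accuracy numbers like these typically come from running a classifier over labeled images, both unmodified and after common transformations, and counting correct verdicts. The sketch below is a minimal, hypothetical evaluation harness in Python; the real provenance classifier is internal to OpenAI, so `stub_classifier` and the byte-level "crop" are stand-ins invented for illustration, not OpenAI's method.

```python
from typing import Callable, List, Tuple

# A labeled sample: (image bytes, True if the image is AI-generated).
Sample = Tuple[bytes, bool]

def accuracy(classify: Callable[[bytes], bool], samples: List[Sample]) -> float:
    """Fraction of samples where the classifier's verdict matches the label."""
    correct = sum(1 for img, is_generated in samples
                  if classify(img) == is_generated)
    return correct / len(samples)

def crop(img: bytes) -> bytes:
    """Toy 'common modification': trim a byte from each end (stand-in for a real crop/resize)."""
    return img[1:-1]

# Stub classifier (hypothetical): treats images whose first byte is a marker
# as AI-generated. A real classifier would use learned image features.
def stub_classifier(img: bytes) -> bool:
    return img[:1] == b"\x01"

originals: List[Sample] = [(b"\x01abc", True), (b"\x00xyz", False)]
modified: List[Sample] = [(crop(img), label) for img, label in originals]

print(accuracy(stub_classifier, originals))  # → 1.0 on unmodified samples
print(accuracy(stub_classifier, modified))   # → 0.5: accuracy drops after modification
```

The gap between the two numbers mirrors the pattern in OpenAI's figures (over 99% unmodified vs. over 95% after cropping or resizing): robustness is measured by re-scoring the same labeled set after each transformation.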
OpenAI anticipates that the provenance classifier will be a valuable asset in the ongoing challenge of determining the origins of AI-generated content. This is a challenge that the company believes will require collaboration across various stakeholders in the AI industry.
Featured Image via Unsplash