Microsoft's approach to combating abusive AI-generated content



 Microsoft On the Issues Blog:

Each day, millions of people use powerful generative AI tools to supercharge their creative expression. In so many ways, AI will create exciting opportunities for all of us to bring new ideas to life. But, as these new tools come to market from Microsoft and across the tech sector, we must take new steps to ensure these new technologies are resistant to abuse.

The history of technology has long demonstrated that creativity is not confined to people with good intentions. Tools unfortunately also become weapons, and this pattern is repeating itself. We’re currently witnessing a rapid expansion in the abuse of these new AI tools by bad actors, including through deepfakes based on AI-generated video, audio, and images. This trend poses new threats for elections, financial fraud, harassment through nonconsensual pornography, and the next generation of cyberbullying.

We need to act with urgency to combat all these problems.

Encouragingly, there is a lot we can learn from our experience as an industry in adjacent spaces – in advancing cybersecurity, promoting election security, combating violent extremist content, and protecting children. We are committed as a company to a robust and comprehensive approach that protects people and our communities, based on six focus areas:
  1. A strong safety architecture. We are committed to a comprehensive technical approach grounded in safety by design. Depending on the scenario, a strong safety architecture needs to be applied at the AI platform, model, and application levels. It includes aspects such as ongoing red team analysis, preemptive classifiers, the blocking of abusive prompts, automated testing, and rapid bans of users who abuse the system. It needs to be based on strong and broad-based data analysis. Microsoft has established a sound architecture and shared what we have learned via our Responsible AI and Digital Safety Standards, but it’s clear that we will need to continue to innovate in these spaces as technology evolves. (A minimal illustrative sketch of this kind of prompt screening appears after this list.)
  2. Durable media provenance and watermarking. This is essential to combat deepfakes in video, images, or audio. Last year at our Build 2023 conference, we announced media provenance capabilities that use cryptographic methods to mark and sign AI-generated content with metadata about its source and history. Together with other leading companies, Microsoft has been at the forefront of R&D on methods for authenticating provenance, including as a co-founder of Project Origin and of the Coalition for Content Provenance and Authenticity (C2PA) standards body. Just last week, Google and Meta took important steps forward in supporting C2PA, steps that we appreciate and applaud.

    We are already using provenance technology in the Microsoft Designer image creation tools in Bing and in Copilot, and we are in the process of extending media provenance to all of our tools that create or manipulate images. We are also actively exploring watermarking and fingerprinting techniques that help reinforce provenance techniques. We’re committed to ongoing innovation that will help users quickly determine whether an image or video is AI-generated or manipulated. (A simplified sketch of how provenance metadata can be signed and verified appears after this list.)
  3. Safeguarding our services from abusive content and conduct. We’re committed to protecting freedom of expression. But this should not protect individuals who seek to fake a person’s voice to defraud a senior citizen of their money. It should not extend to deepfakes that alter the actions or statements of political candidates to deceive the public. Nor should it shield a cyberbully or a distributor of nonconsensual pornography. We are committed to identifying and removing deceptive and abusive content like this when it is on our hosted consumer services such as LinkedIn, our Gaming network, and other relevant services.
  4. Robust collaboration across industry and with governments and civil society. While each company has accountability for its own products and services, experience suggests that we often do our best work when we work together for a safer digital ecosystem. We are committed to working collaboratively with others in the tech sector, including in the generative AI and social media spaces. We are also committed to proactive efforts with civil society groups and to appropriate collaboration with governments.

    As we move forward, we will draw on our experience combating violent extremism under the Christchurch Call, our collaboration with law enforcement through our Digital Crimes Unit, and our efforts to better protect children through the WeProtect Global Alliance and more broadly. We are committed to taking new initiatives across the tech sector and with other stakeholder groups.
  5. Modernized legislation to protect people from the abuse of technology. It is already apparent that some of these new threats will require the development of new laws and new efforts by law enforcement. We look forward to contributing ideas and supporting new initiatives by governments around the world, so we can better protect people online while honoring timeless values like the protection of free expression and personal privacy.
  6. Public awareness and education. Finally, a strong defense will require a well-informed public. As we approach the second quarter of the 21st century, most people have learned that you can’t believe everything you read on the internet (or anywhere else). A well-informed combination of curiosity and skepticism is a critical life skill for everyone.

    In a similar way, we need to help people recognize that you can’t believe every video you see or audio you hear. We need to help people learn how to spot the differences between legitimate and fake content, including with the help of watermarking. This will require new public education tools and programs, including in close collaboration with civil society and leaders across society.
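
To make the "preemptive classifiers" and "rapid bans" mentioned in focus area 1 concrete, here is a minimal Python sketch of a prompt-screening layer. Everything in it – the keyword list standing in for a trained classifier, the strike threshold, and the class and function names – is an illustrative assumption, not a description of Microsoft's actual safety architecture.

```python
# Illustrative sketch only: the classifier, strike limit, and ban policy below
# are invented placeholders, not Microsoft's implementation.
from dataclasses import dataclass, field

# Stand-in for a trained abuse classifier; real systems use ML models, not keywords.
ABUSE_TERMS = {"voice clone scam", "nonconsensual image"}

@dataclass
class PromptSafetyLayer:
    max_strikes: int = 3
    strikes: dict = field(default_factory=dict)

    def looks_abusive(self, prompt: str) -> bool:
        """Placeholder preemptive classifier: flag prompts containing known abuse terms."""
        text = prompt.lower()
        return any(term in text for term in ABUSE_TERMS)

    def screen(self, user_id: str, prompt: str) -> str:
        """Block abusive prompts and ban users who repeatedly abuse the system."""
        if self.strikes.get(user_id, 0) >= self.max_strikes:
            return "banned"
        if self.looks_abusive(prompt):
            self.strikes[user_id] = self.strikes.get(user_id, 0) + 1
            return "blocked"
        return "allowed"

layer = PromptSafetyLayer()
print(layer.screen("user-1", "paint a watercolor landscape"))    # allowed
print(layer.screen("user-2", "write a voice clone scam script"))  # blocked
```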
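
Focus area 2 describes cryptographically signing AI-generated content with metadata about its source and history. The sketch below shows the general idea under heavy simplification: it binds a hash of the content to a small metadata claim and signs the result, so any later edit to the content or the claim breaks verification. Real C2PA manifests use certificate-based (COSE/X.509) signatures and a standardized manifest format rather than the HMAC stand-in here; the key, field names, and generator string are assumptions for illustration.

```python
# Much-simplified provenance sketch; not the C2PA format.
import hashlib, hmac, json

SIGNING_KEY = b"demo-key-not-for-production"  # real signing uses certificates, not a shared secret

def make_manifest(content: bytes, generator: str) -> dict:
    """Build a signed claim binding the content hash to source/history metadata."""
    claim = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,
        "history": ["created by a generative model"],
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Return True only if the content and its metadata are unchanged since signing."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    if claim["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # content was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

image = b"...image bytes..."
manifest = make_manifest(image, "Microsoft Designer (hypothetical example)")
print(verify_manifest(image, manifest))           # True
print(verify_manifest(image + b"x", manifest))    # False: tampering detected
```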
Ultimately, none of this will be easy. It will require hard but indispensable efforts every day. But with a common commitment to innovation and collaboration, we believe that we can all work together to ensure that technology stays ahead in its ability to protect the public. Perhaps more than ever, this must be our collective goal.



Self-preservation of humanity at work; good prevails, and it can't be any other way. (y)
 
