Microsoft has released a new report, "Governing AI: A Blueprint for the Future," detailing five guidelines that governments should consider when developing policies, laws and regulations around AI. The report also highlights how AI is governed within Microsoft itself.
The company has been at the forefront of the AI craze: pouring AI updates into its products, backing OpenAI's viral ChatGPT, and adding features like image and video results to its own Bing chatbot. Despite that bot's infamous tendency to "mislead," or create false information, Microsoft has removed the waiting list for its full release.
Microsoft sees even greater promise in AI: new cures for cancer, new insights about proteins, progress on climate change, defenses against cyberattacks, and even protections for human rights in countries suffering civil wars or foreign invasions.
Progress isn't stopping, but the push to regulate AI is now in full swing, with regulators around the world launching investigations and cracking down on the technology.
"It's not enough to focus only on the many opportunities for using AI to improve people's lives," Brad Smith, president of Microsoft, said in the report, pointing to social media as one such tool that technologists and political commentators once championed, but that five years later "had become both weapon and tool, in this case aimed at democracy itself."
Deepfakes, the alteration of existing content or the creation of entirely new content nearly indistinguishable from reality, are the greatest danger posed by AI, Smith said. For example, a few months ago a synthetic video of US President Joe Biden circulated widely, sparking transphobic discourse.
But combating these emerging ills of AI shouldn't be the sole responsibility of tech companies, Smith said, asking, "How do governments ensure that AI is subject to the rule of law?" and "What form should new law, regulation and policy take?"
Here are Microsoft's five guidelines for governing AI:
- Apply and build on the successes of existing and new government-led AI safety frameworks, in particular the AI Risk Management Framework completed by the US National Institute of Standards and Technology (NIST). Microsoft offers four specific suggestions for building on that framework.
- Create safety brakes for AI systems that control the operation of designated critical infrastructure. These would be similar to the braking systems that engineers have built into elevators, buses and trains. Under this approach, the government would classify AI systems that control critical infrastructure as high-risk, mandate that operators implement safety brakes, and ensure those brakes are in place before the systems are deployed.
- Create a new legal framework that reflects the technology architecture of AI. To that end, Microsoft details the critical components that go into building a generative AI model and proposes specific responsibilities across three layers of the technology stack: the application layer, the model layer, and the infrastructure layer. At the application layer, people's safety and rights would be the priority; the model layer would require regulations involving licensing for these models; and the infrastructure layer would impose obligations on the operators of the AI infrastructure on which these models are developed and deployed.
- Publish an annual AI transparency report and expand access to AI resources for the academic research and non-profit community. The report warns that scientific and technological inquiry will suffer unless academic researchers can access more computing resources.
- Advance public-private partnerships that use AI to help address social challenges.
Microsoft also explained what it is doing internally to govern AI, noting that around 350 of its employees work on responsible AI. It added that over the past six years it has developed ethical principles that have translated into specific corporate policies spanning the training, tooling and testing of systems. Additionally, the company said it has completed reviews of nearly 600 sensitive use cases since 2019.