The Microsoft team racing to catch bugs before they happen

As cybercriminals, hackers, and state-backed scammers continue to flood the world with digital attacks and aggressive campaigns, it’s no surprise that the maker of the ubiquitous Windows operating system is focused on security defense. Microsoft’s Patch Tuesday updates frequently contain fixes for critical vulnerabilities, including those actively exploited by attackers around the world.

The company already has the groups needed to find weaknesses in its code (“the red team”) and develop mitigation measures (“the blue team”). But recently, this format has evolved again to promote more collaboration and cross-disciplinary work, in hopes of catching even more mistakes and flaws before things start to go bad. Known as Microsoft Offensive Research & Security Engineering, or Morse, the department combines the red team, the blue team, and the so-called green team, which focuses on finding flaws, or taking weaknesses the red team has discovered, and fixing them in a more systemic way by changing how things are done within the organization.

“People are convinced that you can’t move forward without investing in security,” said David Weston, vice president of enterprise and operating system security at Microsoft, who has worked at the company for 10 years. “I’ve been in security for a very long time. For most of my career, we were considered boring. Now, if anything, the executives come up to me and say, ‘Dave, are you okay? Did we do all we could?’ It’s been a significant change.”

Morse has worked to promote safe coding practices within Microsoft so that fewer bugs end up in the company’s software in the first place. OneFuzz, an open-source Azure testing framework, lets Microsoft developers constantly and automatically bombard their code with all sorts of unusual use cases, to uncover flaws that wouldn’t be noticeable if the software were only used as intended.
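The core idea behind that kind of continuous fuzzing can be sketched in a few lines. The following is a minimal, self-contained illustration in Rust, not the OneFuzz API: a toy length-prefixed parser (`parse_record` is a hypothetical example function) is hammered with pseudo-random byte strings, and any panic would be the “crash” a fuzzer reports. Real fuzzers like OneFuzz add coverage guidance, corpus management, and crash triage on top of this loop.

```rust
// A toy length-prefixed record parser: the first byte declares the
// payload length. A buggy version might slice without the bounds
// check below and panic on malformed input.
fn parse_record(data: &[u8]) -> Option<Vec<u8>> {
    let len = *data.first()? as usize;
    if data.len() < 1 + len {
        return None; // input claims more bytes than it carries
    }
    Some(data[1..1 + len].to_vec())
}

// Tiny xorshift PRNG so the sketch stays dependency-free.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn main() {
    let mut state = 0x1234_5678_9abc_def0_u64;
    for _ in 0..10_000 {
        // Generate a random-length, random-content input...
        let len = (xorshift(&mut state) % 32) as usize;
        let input: Vec<u8> = (0..len).map(|_| xorshift(&mut state) as u8).collect();
        // ...and feed it to the code under test. A panic here is
        // exactly the kind of flaw "intended use" would never hit.
        let _ = parse_record(&input);
    }
    println!("survived 10000 random inputs");
}
```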

The combined team has also been at the forefront of promoting the use of safer programming languages (like Rust) across the company. They have also advocated embedding security analysis tools directly into the software compiler used in the company’s workflow. This change has had an impact, Weston says, because it means developers aren’t running what-if analysis in a simulated environment at a stage removed from actual production, where certain bugs might be overlooked.
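To see why a memory-safe language removes whole bug classes, consider a short sketch (the function name `read_field` is illustrative, not from Microsoft's codebase). In C, reading past the end of a buffer silently returns adjacent memory, the raw material of many exploitable bugs; in Rust, the same access is either a checked panic or an explicit `Option` the caller must handle.

```rust
// Reading a byte from a network buffer. `.get` bakes the bounds
// check into the type: an out-of-range index yields None rather
// than silently reading whatever bytes happen to sit next door.
fn read_field(buf: &[u8], index: usize) -> Option<u8> {
    buf.get(index).copied()
}

fn main() {
    let packet = [0xde, 0xad, 0xbe, 0xef];

    // In-bounds access works as expected.
    assert_eq!(read_field(&packet, 3), Some(0xef));

    // In C, packet[10] would be undefined behavior; here the
    // compiler forces the caller to confront the missing value.
    assert_eq!(read_field(&packet, 10), None);

    println!("bounds checked");
}
```

The same property holds for use-after-free and data races, which Rust's borrow checker rejects at compile time rather than leaving them for a red team to find later.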

The Morse team says the move to proactive security has led to real progress. In one recent example, Morse members were reviewing legacy software, an important part of the group’s work, since much of the Windows codebase was developed before these extensive security reviews existed. While investigating how Microsoft implemented Transport Layer Security 1.3, the foundational cryptographic protocol used for secure communication on networks like the internet, Morse discovered a remotely exploitable bug that could have allowed attackers to gain access to targets’ devices.

As Mitch Adair, Microsoft’s chief security officer for Cloud Security, said, “That would have been as bad as it gets. TLS is used to secure virtually every service product Microsoft uses.”
