New features from Microsoft set to help organizations detect risky usage of AI

Here are two new features from Microsoft that will enhance the detection of risky AI usage and generative AI interactions.

Microsoft Purview Insider Risk Management (IRM) is introducing new detections for risky AI usage, improving administrators' ability to identify such activity within their organizations. The detections cover both intentional and unintentional insider risk activities related to generative AI applications, including risky prompts that contain sensitive information or intent, and sensitive responses generated from sensitive files or sites. They apply to M365 Copilot, Copilot Studio, and ChatGPT Enterprise, and contribute to Adaptive Protection insider risk levels. Using IRM, administrators can gain anonymized insight into risky AI usage through analytics, create policies to track risky prompts and sensitive responses, and use the new generative AI indicators in Adaptive Protection to assess user risk scores. Microsoft P...

An update to the "Assigned numbers" script

I've tested the "Assigned Numbers" script on Lync Server 2013 and changed the layout somewhat. There is now a "Summary" section listing how many endpoints you have with an assigned Line URI.
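
If you are curious what that summary boils down to, here is a minimal sketch of the idea, assuming the Lync Server 2013 Management Shell is loaded. The cmdlet selection and output layout below are illustrative only, not the published script itself.

# A minimal sketch of a "Summary" of endpoints with an assigned Line URI.
# Assumes the Lync Server 2013 Management Shell; illustrative only.
$endpoints = [ordered]@{
    'Users'                = @(Get-CsUser | Where-Object { $_.LineURI })
    'Common area phones'   = @(Get-CsCommonAreaPhone | Where-Object { $_.LineURI })
    'Analog devices'       = @(Get-CsAnalogDevice | Where-Object { $_.LineURI })
    'Exchange UM contacts' = @(Get-CsExUmContact | Where-Object { $_.LineURI })
}

"Summary"
"-------"
foreach ($type in $endpoints.Keys) {
    # Count endpoints of each type that have a Line URI assigned
    "{0,-22}: {1}" -f $type, $endpoints[$type].Count
}

The real script covers more endpoint types and formats the output differently, but the counting approach is the same.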

Also, the script has been moved to the TechNet Gallery, to keep better track of its use and updates. You can find the script right here: http://gallery.technet.microsoft.com/Lync-numbers-in-use-6c890b9a

As always, I appreciate suggestions for improvements or feedback on bugs.