Hello Abdul Rehman,
Thank you for reaching out on Microsoft Q&A.
The issue you’re experiencing is caused by Azure OpenAI’s default content filters, which apply medium-severity thresholds to every deployment. These defaults can sometimes block legitimate inputs, especially when the text contains personal information or other sensitive-looking data.
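Before changing anything, it can help to confirm that the content filter is what is rejecting your requests. A blocked prompt comes back as an HTTP 400 error with code content_filter, while a filtered completion shows up as finish_reason == "content_filter" on an otherwise successful response. Below is a minimal sketch using the openai Python SDK; the environment variable names, API version, and deployment name are placeholders I’ve assumed, so substitute your own values.

```python
# Minimal sketch: detect content-filter blocks on an Azure OpenAI chat deployment.
# Assumes the openai Python SDK (v1.x); the endpoint, key, API version, and
# deployment name below are placeholders.
import os

from openai import AzureOpenAI, BadRequestError

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

user_text = "Example input containing user-provided personal details."

try:
    response = client.chat.completions.create(
        model="my-deployment",  # placeholder: your deployment name
        messages=[{"role": "user", "content": user_text}],
    )
    choice = response.choices[0]
    if choice.finish_reason == "content_filter":
        # The prompt passed, but the generated output was filtered.
        print("Completion was cut off by the output content filter.")
    else:
        print(choice.message.content)
except BadRequestError as e:
    # A blocked prompt is returned as HTTP 400 with error code "content_filter".
    if e.code == "content_filter":
        print("Prompt was blocked by the content filter:", e.message)
    else:
        raise
```

If you see these content_filter signals for inputs that are clearly benign, that confirms the filter thresholds (rather than, say, a quota or authentication issue) are the cause, and the custom-filter steps below should help.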
To address this, Azure lets you create your own custom content filters. In Azure AI Foundry, go to Guardrails + controls → Content filters, create a filter, and adjust the severity thresholds for each safety category. Once created, you can attach the filter to the specific model deployment. This usually reduces unnecessary blocking in scenarios where your application handles real user data.
However, it’s important to note that the default system filter still applies for most customers: unless you have special approval, you cannot fully override or disable it. Azure requires customers to apply for “modified content filtering” if they need deeper control, such as more relaxed filtering or annotation-only behavior.
If your scenario frequently involves PII or other sensitive inputs, the recommended approach is to configure a custom filter first and see whether it resolves the issue. If not, apply for modified content filtering as described in the official documentation. Without that approval, the default filter may continue to block certain types of content.
Reference:
Configure content filters - Azure OpenAI | Microsoft Learn
This page explains how to configure content filters and, if you are eligible, how to apply to modify them.
I hope this helps. Do let me know if you have any further queries.
Thank you!