
Apps

The Apps dashboard offers a quick view of your organization's GenAI app landscape. It shows how many AI apps are in use, categorized by their security risk level, and highlights the most frequently used AI applications. This dashboard helps you quickly identify trending usage and pinpoint high-risk apps that may require immediate attention.

AI Apps Usage

This panel provides an overview of the total number of GenAI applications used within your organization, categorized by their assessed risk profile over time.

It uses a bar graph to display the number of AI apps over time. Each bar represents a date range and is divided into segments showing the number of apps in each Risk Profile category:

  • Critical
  • High
  • Medium
  • Low
  • Very Low

You can view the total number of apps used, for example, "Total apps used 665". The graph shows trends across different date ranges, for example, "Jun 14 - Jun 21", "Jun 21 - Jun 28", "Jun 28 - Jul 5".

Mouse over the bars to see specific counts for each risk profile within a given period. This panel does not offer drill-through capability to view events or incidents directly from its elements.
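
Conceptually, each bar aggregates the apps observed during a date range by their Risk Profile. The following sketch illustrates that grouping with hypothetical records; the field names and values are assumptions for illustration only, not the product's API or export format.

```python
from collections import Counter

# Hypothetical sample of the panel's underlying data: one record per GenAI
# app observed in a weekly date range, with its assessed Risk Profile.
observed_apps = [
    {"period": "Jun 14 - Jun 21", "app": "PhishingGPT", "risk_profile": "Critical"},
    {"period": "Jun 14 - Jun 21", "app": "AdCreative AI", "risk_profile": "Medium"},
    {"period": "Jun 21 - Jun 28", "app": "PhishingGPT", "risk_profile": "Critical"},
]

# Each bar is one period; each segment is the count of apps per Risk Profile.
bars = {}
for record in observed_apps:
    counts = bars.setdefault(record["period"], Counter())
    counts[record["risk_profile"]] += 1

for period, counts in bars.items():
    print(period, dict(counts), "total apps used:", sum(counts.values()))
```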

Most Used AI Apps

This panel lists the most frequently used GenAI applications, highlighting key metrics for each. It helps identify popular apps within your environment and their associated risk levels.

It presents data in a tabular format, displaying a list of AI applications ranked by their risk profile.

The table includes the following columns:

  • AI App: The name of the GenAI application, for example, "PhishingGPT", "AdCreative AI".
  • Risk Profile: The assessed security risk level for the app, for example, Critical, High, Medium.
  • Users: The number of unique users who have accessed the app.
  • Data: The volume of data associated with the app's usage, for example, "239.0 MB".
  • Usage Over Time: A small line graph showing the trend of the app's usage over the displayed period.
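
For orientation, the following sketch models one row of this table as a simple data structure. The field names (for example, data_mb and usage_over_time) are illustrative assumptions, not the dashboard's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AIAppRow:
    """Hypothetical shape of one row in the Most Used AI Apps table."""
    ai_app: str                  # GenAI application name, e.g. "PhishingGPT"
    risk_profile: str            # "Critical", "High", "Medium", "Low", or "Very Low"
    users: int                   # number of unique users who accessed the app
    data_mb: float               # data volume tied to the app's usage, in MB
    usage_over_time: List[int]   # per-interval counts backing the usage sparkline

row = AIAppRow(
    ai_app="PhishingGPT",
    risk_profile="Critical",
    users=42,
    data_mb=239.0,
    usage_over_time=[5, 9, 14, 11],
)
```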

Drill-down to Risk Assessment

You can click an individual AI App, for example, "PhishingGPT", to drill down and view its AI Summary of the Risk Assessment. This detailed view provides a Risk Summary (overall assessment), breaks down the risk by category (for example, Data Sensitivity & Security, Model Security Risks), and includes a View Full Assessment link for even more detail.

Sort and Filter

You can sort the table by Users and Data, and filter by Risk Profile.
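
Conceptually, those controls amount to filtering rows on the Risk Profile column and ordering them by the Users or Data column, as in the sketch below; the row structure and values are hypothetical.

```python
# Hypothetical rows mirroring the table's columns (values are illustrative).
rows = [
    {"ai_app": "PhishingGPT", "risk_profile": "Critical", "users": 42, "data_mb": 239.0},
    {"ai_app": "AdCreative AI", "risk_profile": "Medium", "users": 87, "data_mb": 64.5},
    {"ai_app": "SummarizeBot", "risk_profile": "High", "users": 12, "data_mb": 12.3},
]

# Filter by Risk Profile, then sort by Users in descending order.
high_risk = [r for r in rows if r["risk_profile"] in {"Critical", "High"}]
by_users = sorted(high_risk, key=lambda r: r["users"], reverse=True)

# Sorting by Data works the same way, keyed on the data-volume column.
by_data = sorted(high_risk, key=lambda r: r["data_mb"], reverse=True)
```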

Understanding the AI App Risk Attributes

When you drill down into a specific AI app from the "Most Used AI Apps" panel, you see a detailed risk assessment view. This view provides a comprehensive breakdown of why an app has a certain risk level.

Detailed Assessment View

This side panel presents a granular security assessment for the selected GenAI application. It aims to explain the app's overall risk profile and the factors contributing to it.

This view includes the following:

  • App Name & Overall Risk: The name of the AI app, for example, "PhishingGPT", and its overall assessed risk level, for example, "Critical".
  • Risk Summary: An AI summary explaining the overall risk associated with the app.
  • Risk Level Indicator: A visual scale, "Very Low" to "Critical", with a pointer indicating the app's current overall risk.
  • View Full Assessment Link: A link to access the complete, detailed assessment, which includes all the security factors across the five risk attribute categories.

Risk Attribute Categories

The assessment breaks down the app's risk across five key categories. Each category has its own assessed risk level and can be expanded to show more details.

  1. Data Sensitivity & Security: This category assesses how the AI application handles user inputs and sensitive data. It looks at aspects like whether the AI retains, stores, or learns from user inputs, potentially exposing sensitive information. It also considers data leakage risks, storage practices (encryption), data retention policies, and if the app shares data with third parties.

  2. Model Security Risks: This category evaluates vulnerabilities related to the AI model itself. It includes checks for known vulnerabilities (for example, against OWASP Top 10 AI scans), supply chain security (third-party packages, pre-trained models), and defenses against model poisoning attacks or unbounded consumption.

  3. Compliance & Regulatory Risks: This category assesses the app's adherence to relevant data privacy regulations, such as GDPR, CCPA, and HIPAA. It also looks at copyright and intellectual property measures, clarity of Terms of Service, and whether the provider holds recognized security certifications, such as SOC 2 Type II or ISO 27001.

  4. User Authentication & Access Controls: This category evaluates how users authenticate to the app, including whether it enforces strong password policies and multi-factor authentication, and how effective its access controls are. It checks whether users can download or share generated content, restrictions on input data types, and API security (rate-limiting, authentication).

  5. Security Infrastructure & Practices: This category assesses the provider's overall security posture and operational practices. It includes vulnerability management processes, incident response plans, and the observability/auditability of data, for example, user query logging, log encryption, and access controls.

By expanding each of these categories, you can see the specific factors that contribute to the overall risk assessment of a GenAI application.
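
As a rough mental model, you can picture the assessment as an app-level record that holds the overall risk level, the AI-generated Risk Summary, and the five categories, each with its own risk level and expandable factors. The sketch below is an illustrative assumption, not the product's data model.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical representation of the drill-down assessment described above.
# Field names and example values are illustrative, not the product's API.

@dataclass
class RiskCategory:
    name: str                    # e.g. "Data Sensitivity & Security"
    risk_level: str              # "Very Low" through "Critical"
    factors: List[str] = field(default_factory=list)  # details shown when expanded

@dataclass
class AppRiskAssessment:
    app_name: str                # e.g. "PhishingGPT"
    overall_risk: str            # overall assessed risk level
    risk_summary: str            # AI-generated summary of the overall risk
    categories: List[RiskCategory] = field(default_factory=list)

assessment = AppRiskAssessment(
    app_name="PhishingGPT",
    overall_risk="Critical",
    risk_summary="Example summary text describing why the app is high risk.",
    categories=[
        RiskCategory("Data Sensitivity & Security", "Critical",
                     ["Retains and learns from user inputs",
                      "No documented data retention policy"]),
        RiskCategory("Model Security Risks", "High"),
        RiskCategory("Compliance & Regulatory Risks", "High"),
        RiskCategory("User Authentication & Access Controls", "Medium"),
        RiskCategory("Security Infrastructure & Practices", "Medium"),
    ],
)
```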