{"id":25347,"date":"2026-03-11T08:00:02","date_gmt":"2026-03-11T07:00:02","guid":{"rendered":"https:\/\/www.hybridforms.net\/en\/?p=25347"},"modified":"2026-03-12T15:02:17","modified_gmt":"2026-03-12T14:02:17","slug":"trusted-ai","status":"publish","type":"post","link":"https:\/\/www.hybridforms.net\/en\/trusted-ai\/","title":{"rendered":"Trusted AI \u2013 Sovereign AI for Public Authorities, KRITIS & Regulated Organizations"},"content":{"rendered":"
Artificial intelligence is changing the way organizations capture data, process it, and make decisions. But for the public sector, security authorities, and critical infrastructures, one principle takes precedence over all efficiency: trust comes before speed. HybridForms responds with Trusted AI \u2013 a concept that combines AI capability with absolute control over data, infrastructure, and decision-making processes.<\/strong><\/span><\/p>\n<\/div><\/div><\/div><\/div><\/div> Trusted AI describes an AI concept in which every function is explainable, controllable, and fully compliant.<\/strong> No black box, no uncontrolled data flows to external services, no structural dependency on American or Asian platform providers.<\/p>\n<\/div> In practice, this means: AI models and AI inference are operated entirely within the organization\u2019s own infrastructure \u2013 on-premises, in a private cloud, or in a sovereign European hosting environment. Data control remains with the operator. For highly regulated domains such as law enforcement, government agencies, healthcare, and critical infrastructure operators, this is not an option but a fundamental prerequisite: operational and citizen data must never leave the controlled environment at any time.<\/p>\n<\/div><\/div><\/div><\/div><\/div> HybridForms implements Trusted AI through two core technical pillars:<\/p>\n<\/div> Purely administratively managed configuration profiles control which (detailed inference) AI functions are activated in which context.<\/strong> Privacy levels, models, and output filters are defined centrally \u2013 and cannot be viewed (and therefore not changed) by individual users or even process designers. 
Privacy is thus enforced, not merely assumed on trust.<\/p>\n<\/div> The integrated AI Services Broker is the heart of the Trusted AI architecture \u2013 an intelligent control center between HybridForms and the AI models in use.<\/strong> This function is purely administrative and accessible only to users with the highest security clearance. It manages routing, logging, access rights, and the selection of permitted endpoints \u2013 both internal and external. Administrators thus retain full oversight and control over the entire AI deployment within an application at all times. Crucially: different AI models can be operated in parallel and selected according to context.<\/strong> If the organization switches AI providers or introduces new models, HybridForms continues to run without interruption. No adjustments to forms, processes, or workflows are necessary.<\/p>\n<\/div> Audit trail & explainability<\/strong> Role-based AI governance<\/strong> The major American cloud providers have democratized AI as a service. 
For organizations operating under GDPR, BSI IT-Grundschutz, NIS2, or sector-specific regulations, however, outsourcing sensitive data to external clouds is often legally impermissible \u2013 and always a loss of sovereignty.<\/p>\n<\/div> \u00bbAnyone who does not know and cannot control their AI infrastructure does not know the risks either \u2013 and cannot take responsibility for them.\u00ab<\/strong> Martin Bene, CTO and Managing Director of icomedias, on the HybridForms Trusted AI principle <\/span><\/em><\/p>\n<\/div> HybridForms is designed so that AI models and inference are operated entirely on dedicated servers, in dedicated data centers, or in sovereign European cloud environments.<\/strong> No data leaves the organization in an uncontrolled manner.<\/p>\n<\/div><\/div><\/div><\/div><\/div> Trusted AI in HybridForms is aimed at organizations where data breaches or uncontrolled AI use would have existential consequences:<\/p>\n<\/div> Public sector & government agencies:<\/strong> Administrations, ministries, and public offices operate under strict data protection law and statutory accountability requirements. Trusted AI enables AI support in mobile form processes, applications, and procedures \u2013 without risk to citizen data.<\/p>\n<\/div><\/li> Law enforcement & security authorities:<\/strong> Operational documentation, investigation data, situation reports, and online criminal complaints are highly confidential and subject to the strictest data protection and security requirements. AI-assisted processes may only take place in fully shielded environments \u2013 Trusted AI is designed for precisely this purpose.<\/p>\n<\/div><\/li> Critical infrastructure (KRITIS):<\/strong> Organizations such as energy suppliers, transport infrastructure operators, and healthcare facilities are subject to the highest requirements for resilience and data protection. 
AI must be fail-safe and fully operable locally.<\/p>\n<\/div><\/li> Large enterprises & regulated industries:<\/strong> Corporations in pharmaceuticals, finance, and industry face comparable requirements: IP protection, regulatory compliance, and control over proprietary data are non-negotiable.<\/p>\n<\/div><\/li><\/ul><\/div><\/div><\/div><\/div> Trusted AI in HybridForms is not an autonomous system \u2013 it is an assistive tool that provides targeted support to specialists without replacing their oversight and approval responsibilities.<\/strong> AI functions are invoked only actively and in the direct processing context: the professional user reviews, evaluates, modifies, and approves. Automated exclusions or independent decisions are ruled out by design.<\/p>\n<\/div> This concept is designed to meet the requirements for human oversight and control under the EU AI Act: AI acts exclusively in the directly requested processing context, operated by trained personnel \u2013 without autonomous decisions, without dynamic suggestion mechanisms, and fully operable on dedicated servers.<\/p>\n<\/div> In practice, a range of assistive functions is available directly within the form and workflow context, for example:<\/p>\n<\/div> Textual image description:<\/strong> AI automatically describes photos and videos as structured text input \u2013 without manual transcription. Ideal for comprehensive documentation in field operations, inspections, or case processing.<\/p>\n<\/div><\/li> Image analysis for hazards & damage:<\/strong> AI identifies safety-relevant features in photos and videos \u2013 such as cracks in load-bearing structures, hazardous material classes, fire loads, or injury patterns. 
This reduces the risk of overlooked anomalies, accelerates initial assessment, and produces legally sound documentation \u2013 in accident reports, infrastructure inspections, and KRITIS site visits.<\/p>\n<\/div><\/li> Audio & video transcription:<\/strong> Spoken content is automatically converted to text \u2013 based on locally operated language models, without data transmission to external services. Applicable for dictated situation reports or evidence uploaded by citizens via the online police portal.<\/p>\n<\/div><\/li> Summarization of documents & media:<\/strong> Extensive documents, reports, and media files are automatically condensed into structured summaries. The result is a proposal \u2013 control and approval remain with the responsible case officer.<\/p>\n<\/div><\/li> Anomaly & consistency checking:<\/strong> AI detects inconsistencies and anomalies in form data and evidence \u2013 such as contradictory statements in case descriptions. This supports quality assurance and review processes without replacing human judgment.<\/p>\n<\/div><\/li> Translation support:<\/strong> Foreign-language texts, complaints, and documents are translated on a non-binding basis \u2013 as a working aid for back-office case officers. Responsibility for legally relevant translations remains with qualified personnel.<\/p>\n<\/div><\/li> Flagging of potentially dangerous content and circumstances:<\/strong> AI detects and marks such content and circumstances before detailed review by specialists \u2013 and presents it for examination with elevated priority. The primary target use case is content that may require immediate police action.<\/p>\n<\/div><\/li><\/ul> All functions operate within the security architecture of the AI Services Broker<\/strong> \u2013 seamlessly integrated for the user, fully controlled for the administrator, and granularly configurable via Secure AI Presets. 
The broker layer is administered at the highest level and is not accessible even to tenant administrators.<\/p>\n<\/div><\/div><\/div><\/div><\/div> European AI regulation is taking shape. The EU AI Act classifies AI systems in the areas of public safety, critical infrastructure, and law enforcement as high-risk systems with far-reaching requirements for transparency, documentation, and human oversight.<\/p>\n<\/div> HybridForms Trusted AI is designed from the ground up for these requirements.<\/strong> Audit trail, granular AI governance via the AI Services Broker, and complete local data storage form the technical foundation for demonstrable compliance \u2013 not as a retroactive effort, but as an integral part of the product design. Security by Design, Privacy by Design, and Compliance by Design<\/strong> are not marketing promises but architectural principles embedded in every layer of the platform.<\/p>\n<\/div> Organizations can thus demonstrate: AI systems are under human control, data is not processed in an uncontrolled manner, and decision-making processes in mobile form processes as well as back-office workflows are fully documented.<\/p>\n<\/div> Supported Regulatory Frameworks (specific to implementation):<\/strong><\/p>\n<\/div> Digital sovereignty is no longer an ideological position \u2013 it is a strategic necessity. 
Organizations that have made critical infrastructure and processes dependent on a few global providers are increasingly experiencing the fragility of this dependency: geopolitical shifts, data scandals, or changes to major platforms\u2019 terms of service.<\/p>\n<\/div> HybridForms implements digital sovereignty technically and architecturally \u2013 equally in mobile form processes, in complex review and approval workflows, and in the AI layer.<\/strong> Those who wish to deploy AI without relinquishing control over their own data will find in HybridForms a platform that consistently delivers on this promise.<\/p>\n<\/div> Made for Europe<\/strong> is a design decision: for European data protection law, European transparency and accountability standards, and European technical norms. In a world where AI is becoming part of critical infrastructure, trust matters more than the breadth of any feature list.<\/p>\n<\/div><\/div><\/div><\/div><\/div>What is Trusted AI?<\/h2><\/div><\/div><\/div>
The Security Model: Secure AI Presets & AI Services Broker<\/h2><\/div><\/div><\/div>
HybridForms.AI Secure Presets<\/h3><\/div>
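Conceptually, a centrally managed preset of this kind can be pictured as follows. This is a minimal illustrative sketch: every name, field, and value in it (SecureAIPreset, privacy_level, the model identifiers) is an assumption chosen for illustration, not the actual HybridForms schema.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and values are assumptions,
# not the actual HybridForms preset schema.
@dataclass(frozen=True)  # frozen: presets cannot be mutated after loading
class SecureAIPreset:
    context: str          # processing context, e.g. a form type
    privacy_level: str    # e.g. "strict" for citizen data
    allowed_models: tuple  # models permitted in this context
    output_filters: tuple = ()  # centrally defined output filters

# Presets are defined centrally by administrators; users and process
# designers can neither view nor change them.
PRESETS = {
    "incident-report": SecureAIPreset(
        context="incident-report",
        privacy_level="strict",
        allowed_models=("local-llm-8b",),
        output_filters=("pii-redaction",),
    ),
}

def resolve_preset(context: str) -> SecureAIPreset:
    """Server-side lookup: privacy is enforced, not assumed on trust."""
    preset = PRESETS.get(context)
    if preset is None:
        # No preset means no AI function is enabled in this context.
        raise PermissionError(f"No AI functions enabled for context {context!r}")
    return preset
```

The point of the sketch is the enforcement direction: the profile is resolved server-side per context, so a missing or restrictive preset simply disables the function rather than relying on client behavior.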
HybridForms.AI Services Broker<\/h3><\/div>
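The broker's routing role can be pictured as a permission table plus a log entry per call. A minimal sketch under stated assumptions: the endpoint URLs and function names below are hypothetical, and nothing here is the actual HybridForms API.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-broker")

# Illustrative routing table: the broker maps a (function, context) pair
# to a permitted endpoint. Swapping AI providers means changing this
# table, not the forms, processes, or workflows that use it.
ROUTES = {
    ("transcription", "field-report"): "https://ai.internal.example/whisper",
    ("summarization", "case-file"): "https://ai.internal.example/llm",
}

def route(function: str, context: str, user_role: str) -> str:
    """Select the permitted endpoint and write an audit log entry."""
    endpoint = ROUTES.get((function, context))
    if endpoint is None:
        # Anything not explicitly permitted is denied.
        raise PermissionError(f"{function!r} not permitted in {context!r}")
    log.info("AI call: role=%s function=%s context=%s endpoint=%s",
             user_role, function, context, endpoint)
    return endpoint
```

The indirection is the design choice: callers name a function and a context, never a concrete model or provider, which is what lets models be exchanged in parallel without touching the application layer.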
\nEvery AI-assisted action in mobile form processes and digital workflows is comprehensively documented and traceable. For public authorities and regulated industries, this traceability is an indispensable compliance requirement.<\/p>\n<\/div>
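What such an audit record might contain can be sketched as follows; the field names are illustrative assumptions, not the actual HybridForms log format. Storing only a hash of the payload keeps each action traceable without duplicating sensitive content into the log.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch of an audit record; fields are assumptions,
# not the actual HybridForms log format.
def audit_entry(user: str, action: str, model: str, payload: bytes) -> dict:
    """Record who requested which AI action, when, and with which model.

    Only a SHA-256 hash of the payload is stored, so the entry proves
    what was processed without copying the content itself."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "model": model,
        "payload_sha256": hashlib.sha256(payload).hexdigest(),
    }

entry = audit_entry("clerk-17", "summarize", "local-llm-8b", b"case file text")
print(json.dumps(entry, indent=2))
```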
\nAI access and AI permissions follow the existing role and permission model of HybridForms. Field personnel, back-office clerks, and administrators receive precisely the AI support that corresponds to their function and security level.<\/p>\n<\/div>
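In concept, this mapping from roles to permitted AI functions is a simple lookup. The sketch below is illustrative only: the role and function names mirror the article's examples, but the structure is an assumption, not the HybridForms permission model.

```python
# Illustrative sketch: roles and function names are assumptions mirroring
# the article's examples, not the actual HybridForms permission model.
ROLE_AI_PERMISSIONS = {
    "field-staff": {"image-description", "audio-transcription"},
    "back-office": {"summarization", "translation", "consistency-check"},
    "administrator": {"broker-configuration"},
}

def may_use(role: str, ai_function: str) -> bool:
    """AI permissions follow the existing role model: each role gets
    exactly the AI support matching its function and security level."""
    return ai_function in ROLE_AI_PERMISSIONS.get(role, set())
```

Because the check reuses the role already assigned to the user, no separate AI permission system has to be administered alongside the existing one.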
<\/span><\/div><\/div><\/div><\/div><\/div>On-Premises instead of Hyperscalers \u2013 Data Sovereignty as Standard<\/h2><\/div><\/div><\/div>
Target Groups: Who Benefits from Trusted HybridForms.AI<\/h2><\/div><\/div><\/div>
Assistive AI Functions in Practice: Mobile Forms & Workflows<\/h2><\/div><\/div><\/div>
Compliance by Design: Security, Privacy & EU AI Act<\/h2><\/div><\/div><\/div>
Made for Europe: Digital Sovereignty<\/h2><\/div><\/div><\/div>