When AI is used in employment and education services, one question consistently arises before all others: Who has the right to use the data — and who decides how it is used?
Without trust, there is no functional service. Without transparent and understandable data use, there is no acceptable AI.
For this reason, data security and user agency are not add-on features in NeduAI. They are the foundation of the entire solution.
Core Principle: The User Owns Their Profile
Across all components — Työprofiili+, Smart Profile, Profile Analysis, and AI Assistant Pro — the same principle applies:
The user understands what data is stored about them, how it is used, and who has access to it.
In practice, this means:
- The user builds their profile themselves
- Profile data is not hidden or collected automatically in the background
- The user can edit, update, and supplement their information at any time
- The profile is not used outside a defined service context and purpose
The data does not belong to the system. It remains user-owned data that is used within a service.
Permission-Based Use, Not General-Purpose AI
In NeduAI, AI does not operate as a “free analyzer.”
Profile data is used:
- Only within clearly defined functionalities (such as Smart assistant chat, Profile Analysis, service recommendations, or expert support tools)
- Only to the extent required by the specific use case and the permissions granted by the user
- Always within the internal service context
This means:
- Profile data is not used to train general-purpose language models outside the service
- Data is not shared with third parties
- AI does not “learn users” outside the system
Data use is restricted, purpose-bound, and controlled.
How AI Is Allowed to Use Profile Data
AI may use profile data only under the following principles:
Context Dependence
The profile is accessed only when the user is actively in the relevant view or functionality.
Data Minimization
Only the data required for the task is made available to the AI, never the full profile. Personally identifiable information and contact details are stored separately from competence data.
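The two principles above can be sketched in code. This is a minimal illustration only: the store names, field whitelists, and function names below are hypothetical, not part of NeduAI's actual implementation.

```python
# Hypothetical sketch of context dependence and data minimization.
# Competence data and personally identifiable data live in separate stores;
# the AI-facing view never touches the PII store.
COMPETENCE_STORE = {
    "user-123": {"skills": ["welding", "CAD"], "education": "vocational", "goals": "retraining"},
}
PII_STORE = {
    "user-123": {"name": "(stored separately)", "email": "(stored separately)"},
}

# Each functionality declares the minimal set of fields it needs.
ALLOWED_FIELDS = {
    "profile_analysis": {"skills", "education", "goals"},
    "service_recommendations": {"skills", "goals"},
}

def build_ai_view(user_id: str, use_case: str, active_context: str) -> dict:
    """Return only the profile data the AI may see for this use case."""
    # Context dependence: the profile is accessible only while the user
    # is actively in the matching view or functionality.
    if active_context != use_case:
        raise PermissionError("profile not accessible outside its context")
    allowed = ALLOWED_FIELDS[use_case]
    profile = COMPETENCE_STORE[user_id]
    # Data minimization: expose only the whitelisted fields, never PII.
    return {k: v for k, v in profile.items() if k in allowed}
```

For example, a call from the service-recommendation view would receive only the `skills` and `goals` fields, while the same call from an unrelated context would be refused outright.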
No Automated Decision-Making
AI does not make binding decisions on behalf of the user. It produces suggestions and analyses.
Human Authority Is Preserved
The final decisions are always made by the user or the expert.
AI functions as support — not as a decision-maker.
The Expert Perspective: Rights and Responsibilities
In public services, experts can access user data only to the extent required for their professional role.
This includes:
- Role-based access control
- Logged and auditable actions
- Transparency regarding who accessed what data and when
AI Assistant Pro follows the same rules:
- It uses only data that the expert is authorized to access and that the user has made visible
- All actions are traceable
- AI does not expand expert permissions; it operates strictly within them
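These rules reduce to a simple invariant: the assistant's view is the intersection of the expert's role permissions and the fields the user has made visible, and every access is logged. The sketch below illustrates this under assumed names; the role table, field names, and audit-log format are hypothetical, not NeduAI's actual design.

```python
# Hypothetical sketch of role-based access with an auditable access log.
from datetime import datetime, timezone

# Each professional role is authorized for a limited set of fields.
ROLE_PERMISSIONS = {
    "career_counselor": {"skills", "education", "goals"},
    "admin_clerk": {"education"},
}

AUDIT_LOG: list[dict] = []

def assistant_pro_view(expert_id: str, role: str,
                       profile: dict, user_visible: set[str]) -> dict:
    """Return the data the AI assistant may use on the expert's behalf."""
    # The AI operates strictly within the intersection of what the
    # expert's role allows and what the user has made visible;
    # it can never expand the expert's permissions.
    allowed = ROLE_PERMISSIONS[role] & user_visible
    view = {k: v for k, v in profile.items() if k in allowed}
    # Every access is logged and auditable: who accessed what, and when.
    AUDIT_LOG.append({
        "expert": expert_id,
        "fields": sorted(view),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return view
```

Note that even a broadly permissioned role sees nothing the user has withheld, and the log records each access regardless of outcome, which is what makes the transparency requirement above auditable in practice.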
GDPR and EU-Based Data Processing
Profile data processing is based on:
- A clearly defined purpose
- User consent or a service relationship
- Data processing within the EU
Data is not transferred outside the EU and is not used for purposes outside the system.
Data protection is not an approval step for AI use — it is a structural starting point.
Why This Is Critical for Trust
Employment and education services deal with:
- People’s futures
- Professional identity
- Sometimes uncertainty and vulnerability
Such sensitive aspects are never stored as system data.
If users cannot trust that:
- They understand how their data is used
- They can influence their own profile
- AI does not make decisions on their behalf
then the service does not work, regardless of how advanced the technology may be.
Summary: Responsible AI Is a Prerequisite, Not a Competitive Advantage
Smart Profile, Smart assistant chat, Profile Analysis, and AI Assistant Pro all rest on the same foundation:
- The user owns their data
- Data use is purpose-bound
- AI supports but does not decide
- Expert responsibility remains intact
- Data security is built in by design
Responsible AI use should not be seen as an obstacle to progress. It is the condition that makes the adoption of AI in public services possible at all.