SME starter package
The first AI solution in 30 days
With our starter package, you can develop an AI-supported solution for your company within 30 days. After a self-assessment and a coordination meeting, we accompany you through four structured weeks:
Week 1: Basics and approaches
Week 2: Use cases and quick wins
Week 3: Create pilot and empower team
Week 4: Rollout, measuring success, next steps
Procedure
Non-binding inquiry via our online form
Anonymous online self-assessment: questions about tool knowledge, team roles and expectations
Coordination and scheduling
Meetings and on-site sessions during the 30 days

Sequence of the 30 days
Week 1: Basics and approaches
Goal: Clarity about benefits, risks and scope; green light for a limited pilot; basic enablement.
Checklist:
Define goals: select 2-3 measurable goals (e.g. speed up email creation by 30%, halve meeting notes, shorten research by 40%).
Draft simple AI usage guidelines: Data handling (no sensitive data in public tools), human-in-the-loop, transparency, source citation, escalation.
Approve toolset: Microsoft 365 Copilot, Azure OpenAI or equivalent; clarify licensing and data framework with IT.
Define data sources: Permitted content for AI (guidelines, templates, intranet, product documents); exclude personal/sensitive data.
Designate roles: Executive sponsor, product owner, IT/security contact, 2-3 pilot champions.
Create risk register: Legal/data protection aspects (revised Swiss Data Protection Act, revDSG), protection of intellectual property, hallucination risks; define measures and release criteria.
Enable access: Activate pilot users, check Teams/SharePoint/OneDrive structure for secure use.
Mini templates:
AI usage policy (1 page): Purpose, permitted use, rules for sensitive data, review/updates, contact.
Pilot charter: problem, success criteria, scope, schedule, roles, risks, exit criteria.
Week 2: Use cases and quick wins
Goal: Prioritized list of AI use cases and 1-2 productive quick wins.
Steps:
Record workflows: select 3 roles (e.g. sales, customer service, back office); list 5 repetitive tasks each.
Evaluate tasks: Impact (1-5) × Effort (1-5) × Risk (1-5); favor high impact, medium risk.
Select 1 use case, e.g.:
Compose/improve emails and offers
Summarize meetings, tickets or PDFs
Create product FAQs or knowledge articles
Build prompt templates for recurring tasks
Deliver quick wins:
Create a shared prompt library in OneNote or SharePoint
Standardize 5-10 prompts (inputs, tone, restrictions, examples, review checklist)
Document "human-in-the-loop" audits (facts, figures, legal language)
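The task evaluation above (Impact, Effort and Risk each rated 1-5) can be sketched as a small script. The composite score used here (impact divided by effort times risk, so higher is better) is one illustrative weighting consistent with "favor high impact", not a formula prescribed by the package, and the task names are made-up examples.

```python
# Illustrative task prioritization: rate each task 1-5 for impact,
# effort and risk, then rank. The score (impact / (effort * risk),
# higher is better) is one possible weighting, not a fixed rule.

tasks = [
    # (task, impact, effort, risk) -- example ratings, not real data
    ("Draft customer emails",      4, 2, 2),
    ("Summarize meeting notes",    5, 1, 2),
    ("Answer product FAQs",        3, 3, 4),
    ("Research competitor offers", 3, 2, 3),
]

def score(impact, effort, risk):
    return impact / (effort * risk)

ranked = sorted(tasks, key=lambda t: score(*t[1:]), reverse=True)
for name, impact, effort, risk in ranked:
    print(f"{name}: score {score(impact, effort, risk):.2f}")
```

Whichever weighting you choose, apply it consistently across all roles so the shortlist reflects comparable ratings.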
Example prompt template:
Goal: Formulate customer response with next steps
Inputs: customer email, our policy summary, desired tone: professional, friendly
Restrictions: max. 150 words, 3 bullet points with next steps, 1 query
Output format: Subject line + e-mail text
Review: Check facts, data and figures; no commitments without approval
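A standardized prompt like the one above can be stored in the shared prompt library as a small parameterized template, so every team member fills in the same fields. The placeholder names and sample values below are illustrative assumptions, not part of the package.

```python
from string import Template

# Illustrative reusable prompt template mirroring the fields above;
# placeholder names and wording are examples, not a prescribed format.
CUSTOMER_REPLY_PROMPT = Template("""\
Goal: Formulate a customer response with next steps.
Inputs:
- Customer email: $customer_email
- Policy summary: $policy_summary
- Desired tone: professional, friendly
Restrictions: max. 150 words, 3 bullet points with next steps, 1 clarifying question.
Output format: subject line + email text.
Review before sending: check facts, dates and figures; no commitments without approval.
""")

prompt = CUSTOMER_REPLY_PROMPT.substitute(
    customer_email="Hello, when will my order ship?",
    policy_summary="Orders ship within 3 business days; delays are communicated proactively.",
)
print(prompt)
```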
Week 3: Create pilot and empower team
Goal: A functioning pilot project with test users and basic training.
Examples for pilots:
Meeting summary assistant in Teams with quality checklist
Document Q&A bot on policy PDFs (secure, limited corpus) with source citation
Sales email/offer assistant with your own templates and tone of voice
Checklist:
Configure tools: Access, connectors, logging, storage
Create evaluation set: 10 real tasks per use case; define acceptance criteria
Test with 5-10 users: accuracy, time savings, satisfaction (CSAT 1-5), record problems
Training: 60-minute session on prompt patterns, dos/don'ts, examples, guidelines
Iterate: Improve prompt patterns, add guard rails, optimize instructions
Quality checks against hallucinations:
Provide source material; request citations
Narrow down the task and output format
Prefer retrieval to web search for internal company topics
Always check critical content (legal, financial, HR)
Week 4: Rollout, measuring success, next steps
Goal: Controlled rollout to 20-50 users and data-based decision on scaling.
Checklist:
Rollout scope: 10-25 users in 1-2 teams; nominate champions per team
Active guard rails: guidelines, rules for sensitive data, clear "red lines", reporting channel for problems
Measurement:
Productivity: minutes saved per task, weekly time saved per user
Quality: acceptance rate of AI outputs, error rate
Adoption: Weekly active users, number of prompts per user
Feedback loop: Monthly office hours; improvement backlog with ROI estimate
Decision: Continue pilot, scale to other teams or pause and fix blockers
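The metrics above can be tracked from a simple weekly usage log. The log fields, sample numbers and derived figures below are assumptions for illustration, not a reporting format defined by the package.

```python
# Illustrative weekly pilot metrics from a usage log; field names
# and sample data are assumptions, not part of the package.

usage_log = [
    # (user, prompts_used, outputs_accepted, minutes_saved)
    ("anna", 12, 10, 45),
    ("ben",   8,  5, 20),
    ("cara",  0,  0,  0),   # inactive this week
]

active = [u for u in usage_log if u[1] > 0]
weekly_active_users = len(active)
prompts_per_active_user = sum(u[1] for u in active) / weekly_active_users
acceptance_rate = sum(u[2] for u in active) / sum(u[1] for u in active)
minutes_saved_per_user = sum(u[3] for u in active) / weekly_active_users

print(f"Weekly active users: {weekly_active_users}")
print(f"Prompts per active user: {prompts_per_active_user:.1f}")
print(f"Acceptance rate: {acceptance_rate:.0%}")
print(f"Minutes saved per active user: {minutes_saved_per_user:.0f}")
```

Reviewing these figures in the monthly office hours gives the data basis for the scale/pause decision.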
Alongside the all-inclusive SME starter package, we also recommend our optional package "AI briefing for managers", which enables managers to introduce AI responsibly and effectively, or our package "Working with generative AI", which enables teams to use generative AI responsibly and effectively.