How we created the AI Assistant for Grant Thornton Poland, a global accounting and advisory firm

Grant Thornton Poland needed a faster way to handle expert audit questions. We built a custom AI Assistant that pulls answers from legal regulations, internal policies, and past cases, reducing response times and easing pressure on specialists. This case study shows how we did it, and what it means for teams looking to bring real, usable AI into corporate workflows.

Learn more about our project:

  • Client: Grant Thornton Poland – one of the world’s leading audit and advisory organisations. Present in Poland since 1993, employing over 1,000 people and serving over 2,400 clients annually. Globally, the organisation operates in 147 countries, with over 100 years of history and more than 68,000 employees.
  • Client Type: International corporation
  • Industry: Consulting, Finance, Fintech
  • Project Type: Digital Transformation, Corporate Innovation, AI Agent, UX Design
  • Scope of Work: AI application, communication, knowledge base
  • Methodology: Human-Centric AI Design
  • Project Goal: To develop an AI-powered tool designed to answer audit-related questions asked by employees. The AI Assistant uses data from legal regulations, internal company policies, and past resolved cases.
  • Key Outcome: A company-wide AI assistant with access to selected internal data sources (SharePoint, email, websites), based on Retrieval-Augmented Generation (RAG) architecture. The tool accelerates response time and improves document search efficiency.
  • Project Language: Polish
  • Partners: 10senses (technology partner), Grant Thornton Poland
  • Duration: 2024 – 2025
  • EDISONDA Team: Mateusz Jędraszczyk – AI & Automation Design Lead, Marek Zieliński – AI Backend Developer, Mateusz Biesiadowski – AI & Fullstack Developer, Łukasz Borowiecki – Data Scientist, Irina Jackiewicz – Project Manager

Why the client contacted us – issues, challenges and early indicators

The technical team of the Audit Department at Grant Thornton Poland was struggling with a growing number of expert inquiries, which were difficult to handle with a limited number of staff. We defined the goal as creating a tool that would enable more efficient responses to auditors’ specific questions, reducing the time spent searching for information.

The results of the collaboration – what defines the success of the project

Our project delivered tangible results with real business impact:

  • A company-wide AI assistant with access to selected internal data sources (SharePoint, email, websites), based on Retrieval-Augmented Generation (RAG) architecture; a minimal sketch of this pattern follows below this list.
  • A satisfactory level of correct knowledge retrieval at the Proof of Concept stage.
  • Reduced response time to auditors’ questions by:
    – speeding up document search across distributed data sources,
    – letting the AI Assistant reason and answer autonomously,
    – providing a useful, personalised user interface that supports working with questions and controlling the AI Assistant’s responses.
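
The RAG pattern mentioned above is easy to outline. The sketch below is a deliberately simplified, illustrative version of that flow: a toy word-overlap retriever and invented example documents stand in for the production embedding model, vector database and language model, and all names and content are our own, not the client’s actual code or data.

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    source: str   # e.g. a SharePoint file, an email, a web page
    text: str


def retrieve(question: str, knowledge_base: list[Chunk], top_k: int = 3) -> list[Chunk]:
    """Toy retriever: ranks chunks by word overlap with the question.
    The production system uses embeddings and a vector database instead."""
    q_words = set(question.lower().split())
    ranked = sorted(
        knowledge_base,
        key=lambda c: len(q_words & set(c.text.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def build_prompt(question: str, chunks: list[Chunk]) -> str:
    """Places the retrieved snippets in front of the question, so the model
    answers from the provided sources and can cite them."""
    context = "\n\n".join(f"[{c.source}]\n{c.text}" for c in chunks)
    return (
        "Answer the auditor's question using only the sources below. "
        "Cite the source of each statement.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )


if __name__ == "__main__":
    kb = [
        Chunk("policy.pdf", "Audit documentation must be retained for five years."),
        Chunk("email-2024-031", "Retention questions should also reference the national accounting act."),
    ]
    question = "How long must audit documentation be retained?"
    prompt = build_prompt(question, retrieve(question, kb))
    print(prompt)  # in production, this prompt is sent to the language model
```

In the deployed assistant, the generated answer is returned together with the retrieved source snippets, which is what lets auditors verify where a response came from.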

What’s the difference between an AI Assistant and an AI Agent?

AI Assistant – supports the user, answers questions, acts on demand. AI Agent – performs autonomous tasks, makes decisions independently.

How the collaboration unfolded – stages, methods and lessons learned

So far, we have completed the first four key stages.

Stage 1: Understanding the problem

We started with Process Discovery Workshops, sessions during which we examined the selected process step by step. We investigated how long each task took and what could be getting in the way. This helped us better understand where automation would bring the most value.

We assessed each element of the process based on the following criteria:

  • How many people will benefit from the new solution? (i.e., what is the potential impact?),
  • Usefulness of the solution (on a scale of 1–10),
  • Technological feasibility (on a scale of 1–10),
  • Value for the user (on a scale of 1–10).

At the end, we selected the ideas that scored the highest. These became the foundation for our solution.
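
For illustration, the scoring step amounts to something like the sketch below. The exact weighting used in the workshops is not spelled out in this case study, so the sketch simply averages the three 1–10 ratings and keeps the potential reach alongside the score; the example ideas and numbers are invented.

```python
from dataclasses import dataclass


@dataclass
class Idea:
    name: str
    people_affected: int   # potential impact: how many people would benefit
    usefulness: int        # 1-10
    feasibility: int       # 1-10
    user_value: int        # 1-10

    @property
    def score(self) -> float:
        # Simple average of the three ratings; the real workshop weighting may differ.
        return (self.usefulness + self.feasibility + self.user_value) / 3


ideas = [
    Idea("AI answers to recurring audit questions", 200, 9, 8, 9),
    Idea("Fully automated legal interpretation", 200, 9, 3, 7),
]

# The highest-scoring ideas become the foundation for the solution.
for idea in sorted(ideas, key=lambda i: i.score, reverse=True):
    print(f"{idea.name}: score {idea.score:.1f}, reach {idea.people_affected} people")
```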

What exactly is a Process Discovery Workshop?

This is our own way of analysing processes. As in data mining, we look for hidden patterns in the data, but we then go further and look at what people are doing, not just the data in the system. Sometimes the problem is something as simple as putting information on a piece of paper rather than in the system. Our process allows us to identify this and find ways to improve.

What’s more, this evaluation allowed us to look at the entire process methodically and more objectively. We discarded ideas that:

  • exceeded the current capabilities of available AI tools,
  • were likely to produce unreliable results,
  • could be solved using simpler and more cost-effective methods.

For example: instead of developing a complex AI model, we redirected some of the more challenging queries directly into the ticketing system, where the right specialists could take over.

Stage 2: How not to build AI for the sake of AI

Many companies believe that simply adding AI to a process will magically solve their problems. It’s a tempting vision, but far from reality. AI is just a tool for the people who will be using it every day. If we don’t take their needs into account, we risk building a system that doesn’t address real challenges and will be quickly abandoned.

That’s why, as part of the Process Discovery Workshops, we started with a key question: who are we building this solution for? Who are the users?

We identified two main user groups who would be working with the AI Assistant:

  • Technical Department Specialist – currently responsible for answering auditors’ questions by analysing data and providing the necessary information.
  • Audit Specialist – the person who asks the questions and makes decisions based on the answers received.

These two personas have different experiences, needs and expectations of the tool, so we had to make sure that the AI Assistant really supported them, rather than adding new layers of complexity.

This raised another important question: do we have the right data?

AI needs a solid foundation: high-quality data. If the system learns from incorrect or incomplete information, it becomes useless.

So, we analysed the available knowledge sources, such as:

  • team emails,
  • internal websites,
  • external sites with regulations,
  • PDF documents (e.g. legal acts and guidelines).

Emails turned out to be a particularly interesting case. We knew that many valuable answers to auditors’ questions were buried there. But how do you separate the useful ones from everything else?

We asked our client’s Subject Matter Experts (SMEs) to select the most essential materials.

Out of more than 2,000 emails, 217 were chosen for their valuable content. These became one of the knowledge foundations for the AI Assistant.

It was a difficult but necessary step. It not only helped organise the knowledge base but also ensured that the AI would rely on trustworthy sources.

Who is a Subject Matter Expert?

A Subject Matter Expert is someone with deep experience in a specific area of knowledge. It could be a law professor or an experienced software engineer.

To finalise the project scope, we described the tasks in a backlog. We used the User Story method to do this. The result was a list of tasks created by a Product Design specialist supported by Artificial Intelligence.

This allowed us to quickly define key acceptance criteria — control points later used in manual testing.

What is a backlog?

A backlog is a task list organised by importance and associated with specific deadlines.

What are User Stories?

A User Story is a short sentence that describes what a user wants to achieve and why. It’s written in a specific format: “As a [role], I want [goal], so that [motivation].” User Stories help clarify what we’re building together — without requiring technical knowledge. They’re often used as a communication bridge between technical and non-technical team members.

An illustrative User Story in this format might read: “As an Audit Specialist, I want to ask the AI Assistant a question about audit regulations, so that I can make a decision without waiting for the Technical Department Specialist.”

Stage 3: Design – putting Intelligent Interfaces into practice

After defining the personas and verifying available data sources, it was time for the next step: designing the interface and user experience.

As part of testing the concept, we focused on two main objectives:

User experience

  1. Will the chat interface be intuitive and aligned with the brand image?
  2. Do we have all the essential functionalities required by users for this specific AI use case?
  3. Do we know which channels are best for interacting with the AI Assistant (Microsoft Teams, standalone interface, etc.)?
  4. What are the technological possibilities and limitations tied to specific tools (e.g., using Microsoft Copilot Studio vs. a fully customised solution – which is what we ultimately chose)?

Conversation experience with the AI

  1. What tone should the AI use in its responses?
  2. What should be the structure of the AI’s answers? (We implemented features such as AI-suggested follow-up questions; a sketch of this structure follows the list.)
  3. How can we build real trust in AI responses? – for example, we introduced a feature that shows the steps taken by the AI before delivering its answer.
  4. What options should be available in the knowledge source preview, and what should the context be for text snippets used by the AI?
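
To make these decisions concrete, here is a rough sketch of the kind of structured response that supports the features listed above: follow-up question suggestions, the steps the AI took before answering, and previews of the source snippets with their context. The field names and example values are our own illustration, not the client’s actual data model.

```python
from dataclasses import dataclass, field


@dataclass
class SourceSnippet:
    document: str   # e.g. a SharePoint file or an email identifier
    excerpt: str    # the fragment shown in the knowledge source preview
    context: str    # surrounding text, so the user can judge relevance


@dataclass
class AssistantResponse:
    answer: str
    steps: list[str] = field(default_factory=list)              # shown to build trust in the answer
    sources: list[SourceSnippet] = field(default_factory=list)  # source previews with context
    follow_up_questions: list[str] = field(default_factory=list)


response = AssistantResponse(
    answer="Audit documentation must be retained for five years.",
    steps=["Interpreted the question", "Searched the knowledge base", "Selected 1 relevant source"],
    sources=[SourceSnippet("policy.pdf", "…retained for five years…", "Section on documentation retention.")],
    follow_up_questions=["Does the retention period differ for group audits?"],
)
print(response.answer)
```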

We decided on a visual prototype, an interactive mock-up that allowed users to ‘chat’ with a simulated version of the AI. It wasn’t a fully functional assistant yet, but it allowed us to test key interface components.

In parallel with UX testing, we began the technical analysis. The key questions included:

  • Which solution will work best in our environment?
  • What security standards do we need to meet?
  • What development environment will be optimal?

As part of the Proof of Concept (PoC), we focused on quickly testing a basic version of the system. To shorten implementation time as much as possible, we used pre-built UI components. This allowed us to concentrate on the most important part: the AI’s performance.

Even the initial tests showed that the interface was well received, and users felt comfortable using it. That gave us the green light to continue work on the final solution.

Stage 4: Testing the AI Assistant – closed Beta Tests

Once the design phase was complete, it was time for hands-on testing of the AI Assistant. In this phase, our goal was to verify how well the AI handled information search and processing, and how users rated the quality of its responses.

We divided the testing into three key areas:

  1. Automated tests
  2. Semi-automated tests
  3. Qualitative tests with specialists (SMEs – Subject Matter Experts)

Automated tests

The first stage involved automatic analysis of data sources. The AI evaluated each source for usefulness and quality, assigning a score from 1 to 5.

  • Sources scoring above 2.5 were automatically added to the assistant’s knowledge base.
  • Sources scoring below 2.5 were either rejected or flagged for further review.
  • Each score included a justification, enabling better understanding of the AI’s decisions and adjusting if needed.

This filtering system allowed us to eliminate low-quality data even before beginning user testing.
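
A simplified sketch of this filtering step is shown below. The 1–5 scale, the 2.5 threshold and the per-score justification come from the project; the scoring itself was done by the AI, so the hard-coded example ratings and source names here are purely illustrative.

```python
from dataclasses import dataclass


@dataclass
class SourceRating:
    source: str
    score: float          # 1-5, assigned by the evaluating model
    justification: str    # why the score was given, so decisions can be reviewed


def triage(ratings: list[SourceRating]) -> tuple[list[SourceRating], list[SourceRating]]:
    accepted, review = [], []
    for rating in ratings:
        if rating.score > 2.5:
            accepted.append(rating)   # added to the assistant's knowledge base
        else:
            review.append(rating)     # rejected or flagged for further review
    return accepted, review


ratings = [
    SourceRating("email-2023-114", 4.2, "Contains a complete, sourced answer to a recurring question."),
    SourceRating("intranet/outdated-guideline", 1.8, "Superseded by a newer regulation."),
]
accepted, review = triage(ratings)
print(f"{len(accepted)} accepted, {len(review)} for review")
```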

Semi-automated tests

At this stage, we focused on two aspects:

  1. Is the AI correctly retrieving information from the knowledge base?
  2. Are the responses aligned with expectations?

To verify this, we didn’t yet involve domain experts. Instead, we used a semi-automated testing approach:

  • The project team asked the AI questions based on the selected documents.
  • We used state-of-the-art AI models (e.g., Claude) to generate questions and reference answers, which allowed us to assess how closely the assistant’s responses matched the expected results.
  • In cases of incorrect responses, we flagged them for improvement to better tailor the model to users’ specific needs.

This approach allowed us to detect early issues before the assistant was exposed to end-user testing.
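
The loop below sketches this idea under our own naming assumptions: a strong reference model generates question-and-answer pairs from the selected documents, the assistant is asked the same questions, and answers that drift too far from the reference are flagged for improvement. The crude word-overlap comparison stands in for the model-based grading used in the project, and the threshold is arbitrary.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class TestCase:
    question: str
    reference_answer: str   # produced by the reference model from the source document


def overlap(answer: str, reference: str) -> float:
    """Crude similarity: fraction of reference words that appear in the answer."""
    ref_words = set(reference.lower().split())
    return len(set(answer.lower().split()) & ref_words) / max(len(ref_words), 1)


def run_tests(cases: list[TestCase], assistant: Callable[[str], str], threshold: float = 0.5) -> list[str]:
    flagged = []
    for case in cases:
        answer = assistant(case.question)
        if overlap(answer, case.reference_answer) < threshold:
            flagged.append(case.question)   # flagged for improvement before end-user testing
    return flagged


cases = [TestCase("How long must audit files be kept?", "Audit files must be kept for five years.")]
print(run_tests(cases, lambda q: "They must be kept for five years."))  # [] means no case was flagged
```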

Qualitative tests with Subject Matter Experts (SMEs)

In the next phase, we deployed the AI in a controlled testing environment (a so-called “sandbox”). Here, domain experts were free to ask questions and assess the quality of responses.

Experts evaluated the AI based on:

  • Was the response factually accurate?
  • Did the AI correctly search the knowledge base?
  • Could the AI respond to real-world questions previously asked by clients (e.g., based on historical emails)?

What are Alpha and Beta tests?

These are common terms used to describe stages in product testing. Alpha tests typically refer to closed internal testing (e.g., among company employees or within a specific department). Beta tests usually expand access to selected end users from the target group.

What’s next? Phase 2 of the project ahead

The second phase of the project involves extending the tests to around 200 users. This will allow us to more accurately assess the effectiveness of the AI assistant depending on the complexity of the questions and the role of the individual user.

In this phase, we’ll focus on:

  • New document formats: the AI Assistant will be able to analyse documents such as contracts.
  • Extended conversation context: enabling the AI to refer to previous parts of the conversation.
  • Improved response accuracy: model optimisation and algorithm refinement.
  • Automated knowledge base updates: integration with external websites.
  • Enhanced interface features: such as the ability to save and reuse previously asked questions.

Key Takeaways

  • The effectiveness of AI depends on real user needs. These should be thoroughly analysed before implementation.
  • AI is not a one-size-fits-all solution. It requires careful design, just like traditional systems.
  • Dedicated solutions are essential; simply replicating popular chat interfaces is not enough.
  • Awareness of AI’s capabilities and limitations within the team facilitates collaboration and optimisation.
  • Expert validation (by SMEs) is crucial for maintaining quality and user trust.

A key element of the process was assembling an agile team with previous experience in designing and building AI solutions. This allowed us to bypass the slow process of learning AI fundamentals and reduce project management overhead. While the collaboration was driven by a team of interdisciplinary AI professionals, the roles themselves can be considered ‘classic’:

  • Subject Matter Experts (SMEs): individuals with deep knowledge of the project’s domain — specialists who supported the team through AI Assistant testing.
  • Chief Design Officer: primarily responsible for client communication and providing necessary resources.
  • Lead AI Product Designer: designed AI solutions with a focus on user needs.
  • AI Backend Developer: selected the right AI model, built the repository, vector database, and virtual machine, and ensured data processing security.
  • AI & Fullstack Developer: responsible for prompts, instructions, and execution steps (carried out by the AI before delivering the final answer), as well as frontend development.

Find out where AI can really help your team. Let’s talk.
