FAQs: Benchmarking with AI: Challenges and Successes


After our successful interactive transfer pricing webinar, Benchmarking with AI: Challenges and Successes, this blog post answers the attendee questions we couldn’t get to during the session.

Are there any collaboration features within TPbenchmark's AI tools, especially when multiple team members are involved in the testing process?

Absolutely! TPbenchmark supports collaboration by giving multiple users access to the system at the same time. Working together on a benchmark with the help of AI makes benchmarking considerably easier. The AI tool is designed to enhance teamwork and efficiency.

What kind of support and training is available for users transitioning to AI-driven testing in TPbenchmark?

Transitioning to AI-driven testing is a journey, and we understand the importance of support. TPbenchmark provides comprehensive training for the tool itself as well as specialized AI (prompt) training, trials, and responsive support channels. We’re here to assist users at every step of the way.

Can you provide examples of successful implementations of TPbenchmark's AI features in real-world scenarios?

We have numerous success stories where TPbenchmark’s AI features have significantly improved testing efficiency, identified performance bottlenecks, and optimized resources. In our own practice, we save up to 80% of our benchmark preparation time by using the AI Review Assistant, and we see clients come close to this number as well. A trial of TPbenchmark can demonstrate these benefits for you in a short period of time.

How does AI in TPbenchmark impact the day-to-day tasks of a traditional testing professional?

Great question! AI in TPbenchmark automates repetitive tasks, allowing testing professionals to focus on more strategic and complex aspects of Transfer Pricing. It enhances efficiency, enabling us to achieve more in less time.

How can we trust the suggestions provided by the AI in TPbenchmark, and how is the auditing process facilitated?

Trust is crucial, and our auditing process is designed to ensure transparency. While the AI Review Assistant provides suggestions, a human reviewer is an integral part of the process. We make it easier to review companies by leveraging the potential of AI, but human oversight ensures accuracy and reliability in the final results.

Can you elaborate on the fixed information provided to the AI model and how it prevents speculation or creative interpretations?

Yes. We provide the AI model with fixed information obtained directly from the website of a company. By doing so, we maintain full control over the data, minimizing the risk of speculation or creative interpretations. The model we use is not a black box; it works from specific, verifiable data that undergoes careful scrutiny.

Can you explain the role of a human reviewer in the auditing process and how TPbenchmark ensures the accuracy of AI-generated suggestions?

A human reviewer plays a vital role in the auditing process. While the AI provides suggestions, the reviewer ensures accuracy, relevance, and context. This dual approach, combining AI efficiency with human judgment, is key to producing reliable benchmarking results that stand up to scrutiny.

Do users need to provide the company website URL to get scraped data or does the AI scrape the data simply based on company name?

TPbenchmark uses the company website URL that is retrieved from the database. However, if, for example, a company’s website is wrong, you can manually overwrite the URL and start the scrape process for this single company. This way, the tool ensures a full audit trail with the correct screenshots and translated text.

Our license with BvD says we are not allowed to upload their data to external tools. It seems that your workflow starts with uploading BvD data. How do you overcome this issue?

We presently have multiple clients that use TPbenchmark in combination with the BvD database. TPbenchmark is compatible with various databases such as CapitalIQ, FAME, BvD, Moody’s and others. If you would like further clarification, please feel free to reach out to our team here.

How does AI collect financial data to compute the interquartile range?

The AI Review Assistant does not collect financial data. The financial data is retrieved from the database. After the review process, the results are calculated based upon the database import.
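The arithmetic itself is standard: once the accepted comparables’ financial data (for example, their operating margins) have been imported from the database, the interquartile range is simply the spread between the first and third quartiles. A minimal Python sketch, using illustrative numbers that are not from any actual benchmark:

```python
from statistics import quantiles

# Hypothetical operating margins (%) of nine accepted comparables,
# as they might arrive from the database import -- illustrative only.
margins = [2.1, 3.4, 3.8, 4.5, 5.0, 5.6, 6.2, 7.9, 9.3]

# method="inclusive" uses linear interpolation over the sorted sample,
# the convention commonly used for transfer pricing quartiles.
q1, median, q3 = quantiles(margins, n=4, method="inclusive")

print(f"Q1={q1:.1f}  median={median:.1f}  Q3={q3:.1f}")
print(f"Interquartile range: {q1:.1f} to {q3:.1f}")
```

Results falling between `q1` and `q3` would be considered within the arm’s-length range under this convention.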

How frequently are the website data updated?

The website data is scraped by TPbenchmark the moment you start the benchmark in the tool, or when the company website URL is manually overwritten. These screenshots and text then remain as they are. Each time a benchmark is started, TPbenchmark scrapes the website information at that point in time.
Hence, if in 5 years’ time you need to provide screenshots of the companies from the original review, the tool still has the “old” screenshots, i.e. the screenshots taken at the time the benchmark was started. This ensures an efficient audit trail.

How are we sure that the AI will not merge data from one client with data from another group?

The AI Review Assistant focuses only on the rejection reasons provided for that particular benchmark. It does not check reasons from other benchmarks within the tool. It is restricted to the benchmark currently being worked on: it may only make suggestions there and cannot interfere with other benchmarks.

If I understand correctly, if the descriptions of companies' profiles obtained from the internet are "garbage", the review made by AI will also be unreliable "garbage"?

If we do not provide the AI Review Assistant with a correct description of what it needs to review, the review itself will be inaccurate. If we instruct the AI Review Assistant to reject a manufacturing company in a manufacturing benchmark based on function, then of course the results are not reliable. Hence, we need to ensure accurate prompts for the rules/parameters/rejection reasons of a benchmark.
The information used by AI in our platform can be (a combination of) the website, trade description and/or full overview. You are free to select one or multiple of these sources. As TaxModel policy, we only let AI review the company’s website, as this is the clearest source and best describes what the company under review is doing. Our scraper copies the text from the website, then the business analyst (AI) makes a summary of these website pages. The TP analyst (AI) tests the rejection reasons against the website and selects an appropriate suggestion (either reject or accept). I, as the preparer of the benchmark, review the suggestions made by AI. This gives a full preparation/review flow that I can use, if necessary, as an audit trail.
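This scrape → summarise → test → human-review flow can be sketched as a simple pipeline. Everything below is a hypothetical illustration, not TPbenchmark’s actual API: the function names are invented, and the two AI steps are replaced by trivial keyword stand-ins.

```python
from dataclasses import dataclass


@dataclass
class Suggestion:
    company: str
    decision: str          # "accept" or "reject"
    rationale: str
    reviewed: bool = False  # flipped once the human preparer confirms


def summarise(website_text: str) -> str:
    # Stand-in for the "business analyst" AI step; the real pipeline
    # would summarise the scraped website pages with a model.
    return website_text.lower()


def matches(summary: str, reason: str) -> bool:
    # Stand-in for the "TP analyst" AI step; a naive keyword check
    # replaces the model testing a rejection reason against the summary.
    return reason.lower() in summary


def review_company(company: str, website_text: str,
                   rejection_reasons: list[str]) -> Suggestion:
    """Scrape text is summarised, then tested against each rejection reason."""
    summary = summarise(website_text)
    for reason in rejection_reasons:
        if matches(summary, reason):
            return Suggestion(company, "reject", reason)
    return Suggestion(company, "accept", "no rejection reason matched")


s = review_company("ExampleCo",
                   "ExampleCo is a contract manufacturer of pumps.",
                   ["manufacturer", "holding company"])
print(s.decision, "-", s.rationale)
```

The `reviewed` flag reflects the point made above: the AI only produces suggestions, and the human preparer remains the one who confirms or overrides them.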

Does the tool save the characteristics (rejection/acceptance reasoning) of a company for future benchmarkings?

The benchmark is saved on the TPbenchmark platform. One of the ways to re-use comments is that you may duplicate a benchmark. The overlapping companies will automatically retrieve the rejection or acceptance reason from the original benchmark. Of course, you may still overwrite the comments and review the new benchmark. If you change a comment in the new benchmark, the original benchmark does not change and will remain in its finished state.
You can now also save your AI Review Assistant prompts within the TPbenchmark tool, allowing you to return to them for further editing. You no longer have to copy your prompt each time you want to try it out; prompts stay ready to be tweaked to your needs, significantly streamlining your review process.

Missed the live webinar on Benchmarking with AI? Watch it now on-demand.