FAQs: Benchmarking with AI: Challenges and Successes
Following our successful interactive transfer pricing webinar, Benchmarking with AI: Challenges and Successes, this blog post answers the attendee questions we couldn’t get to during the session.
Are there any collaboration features within TPbenchmark's AI tools, especially when multiple team members are involved in the testing process?
Absolutely! TPbenchmark supports collaboration by giving multiple users access to the system at the same time. Working together on a benchmark, with AI handling much of the routine work, makes the process considerably easier. The AI tools are designed to enhance teamwork and efficiency.
What kind of support and training is available for users transitioning to AI-driven testing in TPbenchmark?
Transitioning to AI-driven testing is a journey, and we understand the importance of support. TPbenchmark offers comprehensive training for the tool itself, as well as specialized AI (prompt) training, trials, and responsive support channels. We’re here to assist users every step of the way.
Can you provide examples of successful implementations of TPbenchmark's AI features in real-world scenarios?
We have numerous success stories where TPbenchmark’s AI features have significantly improved testing efficiency, identified performance bottlenecks, and optimized resources. In our own practice, the AI Review Assistant saves up to 80% of our benchmark preparation time, and we see clients come close to that number as well. A trial of TPbenchmark can show you these benefits within a short period of time.
How does AI in TPbenchmark impact the day-to-day tasks of a traditional testing professional?
Great question! AI in TPbenchmark automates repetitive tasks, allowing testing professionals to focus on more strategic and complex aspects of Transfer Pricing. It enhances efficiency, enabling us to achieve more in less time.
How can we trust the suggestions provided by the AI in TPbenchmark, and how is the auditing process facilitated?
Trust is crucial, and our auditing process is designed to ensure transparency. While the AI Review Assistant provides suggestions, a human reviewer is an integral part of the process. We make it easier to review companies by leveraging the potential of AI, but human oversight ensures accuracy and reliability in the final results.
Can you elaborate on the fixed information provided to the AI model and how it prevents speculation or creative interpretations?
Yes. We provide the AI model with fixed information obtained directly from a company’s website. By doing so, we maintain full control over the data, minimizing the risk of speculation or creative interpretations. The model we use is not a black box; it’s a tool working with specific, verifiable data that undergoes careful scrutiny.
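For illustration only, a minimal sketch of this principle is shown below. The function and marker text are hypothetical and are not TPbenchmark’s actual implementation; the idea is simply that the model is asked to reason only over the verbatim website text it is given.

```python
# Hypothetical sketch of constraining a model to fixed inputs; this is
# illustrative and not TPbenchmark's internal code.
def build_review_prompt(scraped_text: str, search_criteria: str) -> str:
    """Build a prompt in which the model may only use the verbatim website
    text and the stated criteria, with no outside knowledge."""
    return (
        "Assess the company strictly against the text between the markers below.\n"
        "Do not use outside knowledge and do not guess missing facts.\n"
        f"Acceptance criteria: {search_criteria}\n"
        "=== WEBSITE TEXT START ===\n"
        f"{scraped_text}\n"
        "=== WEBSITE TEXT END ==="
    )
```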
Can you explain the role of a human reviewer in the auditing process and how TPbenchmark ensures the accuracy of AI-generated suggestions?
A human reviewer plays a vital role in the auditing process. While the AI provides suggestions, the reviewer ensures accuracy, relevance, and context. This dual approach, combining AI efficiency with human judgment, is key to producing reliable benchmarking results that stand up to scrutiny.
Do users need to provide the company website URL to get scraped data, or does the AI scrape the data based simply on the company name?
TPbenchmark uses the company website URL retrieved from the database. However, if a company’s URL in the database is incorrect, you can manually override it and start the scrape process for that single company. This way, the tool maintains a full audit trail with the correct screenshots and translated text.
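Conceptually, the override step works like the simplified sketch below. The function name and the stored fields are hypothetical, and only the raw HTML is kept here for brevity; it is not the tool’s actual code.

```python
from datetime import datetime, timezone
import requests  # assumes the requests package is available

def rescrape_company(company_id: str, corrected_url: str) -> dict:
    """Fetch a single company's website from a manually corrected URL and
    keep a timestamped record for the audit trail."""
    response = requests.get(corrected_url, timeout=30)
    response.raise_for_status()
    return {
        "company_id": company_id,
        "url": corrected_url,
        "retrieved_at": datetime.now(timezone.utc).isoformat(),
        # In practice a screenshot and translated text would also be stored;
        # here only the raw HTML is kept for simplicity.
        "html": response.text,
    }
```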
Our license with BvD says we are not allowed to upload their data to external tools. It seems that your workflow starts with uploading BvD data. How do you overcome this issue?
We currently have multiple clients using TPbenchmark in combination with the BvD database. TPbenchmark is compatible with various databases such as CapitalIQ, FAME, BvD, Moody’s, and others. If you would like further clarification, please feel free to reach out to our team here.
How does the AI collect financial data to calculate the interquartile range?
The AI Review Assistant does not collect financial data. The financial data is retrieved from the database. After the review process, the results, including the interquartile range, are calculated from the database import.
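As a simple illustration of that last step, the sketch below computes an interquartile range from imported comparable financials. The margins are made-up numbers, and the snippet is not TPbenchmark’s internal code; it only shows that the range comes from the database figures, not from anything the AI generates.

```python
import numpy as np

# Hypothetical operating margins (%) of accepted comparables,
# as they would come in with a database import.
accepted_margins = [2.1, 3.4, 3.8, 4.5, 5.0, 5.6, 6.2, 7.9]

# The range is calculated after the review step, from the imported financials.
q1, median, q3 = np.percentile(accepted_margins, [25, 50, 75])

print(f"Q1: {q1:.2f}%  Median: {median:.2f}%  Q3: {q3:.2f}%")
```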
How frequently are the website data updated?
How can we be sure that the AI will not merge data from one client with data from another group?
If I understand correctly, if the descriptions of companies’ profiles obtained from the internet are “garbage”, the review made by the AI will also be unreliable “garbage”?
Does the tool save the characteristics (rejection/acceptance reasoning) of a company for future benchmarking studies?
Missed the live webinar on Benchmarking with AI? Watch it now on-demand.