Public app: Search through the documentation and community forum using AI

Finding what you are looking for within VIKTOR can be a challenge. That is why we are introducing the VIKTOR search app! This app uses the same LLM as ChatGPT and can answer your questions based on context, instead of just keywords. In case no answer is found, you can always check the most relevant links, since all the sources are provided as well!

In the background, the app works similarly to our document searcher app. First, embeddings are created for all of the VIKTOR documentation and community forum posts. When a question is asked, the question is embedded as well. Then, using cosine similarity, the context closest to the question is retrieved. Finally, an answer is generated from the question together with the retrieved context. This LLM technique is also referred to as retrieval augmented generation (RAG).
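To make the retrieval step concrete, here is a minimal sketch of the embed-and-compare flow described above. Note the assumptions: the `embed` function below is a toy bag-of-words stand-in (the real app presumably calls an LLM embedding API), and the documentation snippets are made up for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words count vector.
    # A production RAG setup would call an LLM embedding endpoint instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine_similarity(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Step 1: embed all documentation/forum snippets once, up front.
docs = [
    "Install the VIKTOR CLI and run it to develop an app",
    "Use a GeometryView to display 3D models in your VIKTOR app",
    "The community forum is the place to ask questions about VIKTOR",
]
doc_embeddings = [embed(d) for d in docs]

# Step 2: embed the incoming question.
question = "How do I show a 3D model in my app?"
q_embedding = embed(question)

# Step 3: retrieve the closest context by cosine similarity.
scores = [cosine_similarity(q_embedding, e) for e in doc_embeddings]
best = docs[scores.index(max(scores))]
print(best)  # the GeometryView snippet scores highest for this question

# Step 4 (not shown): pass the question plus `best` to the LLM as the
# prompt, so the generated answer is grounded in the retrieved context.
```

The same four steps scale to a real corpus: the document embeddings are computed once and stored, so each question only costs one embedding call plus a similarity search.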

What do you think of the app? Does the app answer your questions as expected? For any questions about the app, please let me know!


Hey Jelle,

I’m facing some issues here: I can’t use the search app!

Whenever I click the hyperlink “VIKTOR search app”, it redirects me to a login page, but the login always fails. The same happens with some other apps, such as the Warehouse Configurator app.

I’m positive that my credentials are correct, and a friend of mine is having this problem too (with the same apps), so it is not only me!

I would be glad if you could shed some light :bulb:

Hi Leonardo,

Welcome to the forum, thanks for posting!

Good catch, apparently the wrong link was in the original post. I’ve adjusted it and the link should work now without having to log in.

Hope you enjoy!

It’s working!

Thanks, Daniel :raised_hands:


This sounds like a great solution for tackling the challenge of searching through large documentation or community forums. Using embeddings and cosine similarity to match context with a user’s question seems like an efficient approach to ensure more relevant answers are provided, rather than relying purely on keyword matching.

The idea of integrating retrieval augmented generation (RAG) is particularly interesting because it should make the answers more contextually accurate. It’s a smart way of leveraging the LLM to not just generate answers, but to also refine those answers based on the surrounding context, making the tool more adaptive.

I’m curious how well the app handles more complex or nuanced questions, where the context might be a little more intricate. Does the app provide any insights or explanation about why certain answers are generated, or is it purely a “here’s the best match” sort of response?

Overall, the concept sounds solid, and I imagine it will save a lot of time for users trying to find specific info quickly without having to comb through pages of documentation manually. Looking forward to seeing how it evolves!