Category: News

  • NEW FEATURES IN KOSMO

    Some of you have eagerly anticipated this: KOSMO has gained new features with the latest release. Today, we will reveal what these entail.

    PDF Viewer

    When KOSMO generates a response, the sources used are always provided. If the source was a website, you could previously open it with a simple click. This functionality now extends to PDF files that you have provided to KOSMO. With a single click, the file opens in the PDF viewer. The text passages KOSMO utilized for the answer are highlighted. Additionally, you can directly print or download the file. This eliminates the lengthy search for documents!

    Scheduled tasks

    Do you regularly submit the same requests to KOSMO? Then we have the perfect solution for you: scheduled tasks. From now on, you can define what KOSMO should do for you, when, and how often. From weather reports to the latest posts from your favorite website – KOSMO summarizes your updates in a separate chat, ensuring you always stay informed.
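
    Conceptually, a scheduled task boils down to three things: what to ask, when to start, and how often to repeat. A minimal Python sketch of that idea (the names `ScheduledTask` and `next_run` are illustrative only, not KOSMO's actual API):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ScheduledTask:
    """What to ask, when to start, and how often to repeat."""
    prompt: str          # e.g. "Summarize today's weather report"
    first_run: datetime
    interval: timedelta  # e.g. timedelta(days=1)

def next_run(task, now):
    """Return the next execution time after `now` (or the pending first run)."""
    if now <= task.first_run:
        return task.first_run
    # number of whole intervals that have elapsed since the first run
    elapsed = (now - task.first_run) // task.interval
    return task.first_run + (elapsed + 1) * task.interval

task = ScheduledTask("Summarize the latest posts from my favorite website",
                     first_run=datetime(2025, 9, 1, 8, 0),
                     interval=timedelta(days=1))
print(next_run(task, datetime(2025, 9, 3, 9, 30)))  # 2025-09-04 08:00:00
```

    A scheduler then only needs to sleep until the earliest `next_run` across all tasks, execute the prompt, and notify the user.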

    Push notifications

    The latest information is, of course, also available directly on your smartphone. KOSMO notifies you when scheduled tasks have been completed. This ensures you never miss any important information.

    E-Mail connection (beta)

    Currently in beta, but soon fully functional: the integration of your email inbox. Simply store your access credentials, and you can ask KOSMO about the content of your emails. This transforms your emails into a valuable source of information. This feature is already available for IMAP. Gmail users will need to exercise a little more patience.
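
    Under the hood, IMAP access amounts to fetching raw RFC 822 messages and parsing out headers and body. A minimal sketch using only Python's standard library – the connection details are placeholders, not a real mailbox:

```python
import email
from email.header import decode_header

# Connecting would look roughly like this (placeholders, not a real account):
#   import imaplib
#   conn = imaplib.IMAP4_SSL("imap.example.com")
#   conn.login("user@example.com", "app-password")
#   conn.select("INBOX")
#   _, data = conn.fetch(b"1", "(RFC822)")
#   raw = data[0][1]

def summarize_message(raw):
    """Extract sender, subject, and plain-text body from a raw RFC 822 message."""
    msg = email.message_from_bytes(raw)
    subject, enc = decode_header(msg["Subject"])[0]
    if isinstance(subject, bytes):               # encoded headers arrive as bytes
        subject = subject.decode(enc or "utf-8")
    body = ""
    for part in msg.walk():                      # walk() also handles multipart mail
        if part.get_content_type() == "text/plain":
            body = part.get_payload(decode=True).decode(errors="replace")
            break
    return {"from": msg["From"], "subject": subject, "body": body.strip()}

raw = (b"From: alice@example.com\r\nSubject: Meeting notes\r\n"
       b"Content-Type: text/plain\r\n\r\nSee you at 10.\r\n")
print(summarize_message(raw)["body"])  # See you at 10.
```

    The extracted text can then be indexed like any other document source, which is what turns a mailbox into material the assistant can answer questions about.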

    By the way: The email feature, as well as the familiar functions “Nextcloud”, “File Storage”, “Save Websites”, and “Standard Instructions”, can now be found under the menu item “External Resources”.

    And a small preview: The next release is already in the pipeline and is scheduled for late October. Among other things, it will include summaries at the push of a button – you can look forward to it!

  • OUR JULY SPLITBLOG: WHEN CHATBOTS BECOME POLITICAL

    This month, we highlight why it is important to question the origin of chatbots and AI models and to remain critical when interacting with them. The suggestion for this topic was provided by Mats from our backend team.

    Grok 4 has impressively demonstrated in recent weeks how the programming of an AI assistant or chatbot can influence its response behavior. Unrestrained, Grok generated antisemitic and racist statements that made headlines. The company xAI has since apologized, stating that Grok was programmed to respond “honestly” and “not be afraid to shock politically correct people”. Regarding the latter instruction, the goal has certainly been achieved. And even under the premise that bad press is good press, Grok has certainly served its purpose. In any case, the headlines are reason enough to seriously examine the various manufacturers and providers of chatbots and AI assistants. Regardless of the area in which the systems are to be used, a thorough review and extensive testing beforehand are urgently necessary. Especially if companies allow themselves to be represented by chatbots in their public image, serious damage to their reputation can otherwise result.

    But how can AI assistants be led to make such statements? The basis of every language model is training data of varying scope and origin – in other words, vast amounts of information from which responses are generated. How those responses are generated is a question of programming and configuration. For example, it can be specified that certain information sources should be used preferentially, or that the generated answers should be particularly humorous, scientific, long, or short. In Grok’s case, according to data scientist Jeremy Howard, there are also indications that the chatbot often echoes the opinions and statements of xAI owner Elon Musk on controversial topics – although, as programmer Simon Willison notes, this could simply be attributed to Musk’s prominent role.

    Similar trends to those currently seen with Grok can also be observed with other chatbots. DeepSeek also does not answer a number of political questions neutrally. In some cases, the generated answers are deleted shortly after creation and replaced with a “Let’s talk about something else”. Apparently, the bot’s answers are at least somewhat more neutral when using the English version than in the Chinese version. Extensive experiments with DeepSeek reveal a programmed “self-censorship”.

    In Europe, it is not uncommon to equip chatbots with certain ethical standards before they are unleashed upon humanity. For example, our chatbot KOSMO, which is based on a Mixtral language model, responds politely but evasively when it comes to violence and crime. While this behavior is desirable, we believe that objectivity in the presentation of facts should always be ensured. The integrated source verification contributes to this, giving users the opportunity to check and evaluate the sources used.

    A certain bias in language models can never be completely ruled out. A chatbot’s knowledge is only as extensive as its training data and additional information, and its response behavior is often also shaped by user feedback during fine-tuning. Users themselves can also influence the response behavior significantly – often unconsciously – through the prompts they enter.

    In addition to other factors, the origin of the language model used should therefore also be thoroughly examined before relying too heavily on the correctness of the answers.

  • OUR SPLIT BLOG IN JUNE: Is AI Changing Academic Exams?

    This month, we look into the future and address the question of how AI will impact examinations at universities and schools. This topic suggestion comes from our working student Vincent, who is currently completing an exchange semester in Sweden.

    Reports of AI-generated work by pupils and students are becoming more frequent, and the media increasingly discuss how educational institutions are supposed to identify which texts were actually written by humans. Despite some clues – characteristic phrasings, writing style, above-average flawlessness – it is already difficult to determine beyond doubt whether a given text truly originates from a human. With ever-improving language models and prompting methods (e.g., “Write as humanly as possible and include a few errors”), unambiguous detection will only become harder. This is a major problem, considering that a large part of academic assessment relies on written texts: applications, examinations, master’s theses, term papers – examiners everywhere depend on text-based methods. There is a real risk that these examination methods will no longer work reliably in the long term. Detector software that promises to identify artificially generated texts can provide clues, but it is not reliable enough and can often be circumvented with simple means. Particularly alarming: texts written by non-native speakers are often falsely flagged as AI-generated by these programs, which can significantly increase the risk of discrimination in selection processes – especially since it is hard to prove not only that a text was created by AI, but also the opposite.

    But how can universities and other educational institutions address this? Oral examination procedures could in most cases clearly show whether someone has truly thought for themselves and understood. However, oral examinations are associated with enormous time and personnel expenditure and cannot easily assess the same scope of knowledge as written examinations.

    The majority of educational institutions currently still rely on a straightforward prohibition. However, some institutions are already exploring new approaches. Till Krause from the University of Landshut, for example, allows students to actively use AI as a source – as long as this is clearly indicated. Thus, a precise indication of the language model used and the prompt applied is required. Because despite all the challenges that the use of AI brings to educational institutions, AI offers one thing above all: an incredibly vast wealth of information that can be excellently used for learning and provides a fantastic basis for the development of one’s own ideas and thoughts.

    The University of Economics in Prague likewise takes a pragmatic approach to the use of AI. Starting in autumn 2024, its Business Administration program will no longer require a traditional bachelor’s thesis. Instead, there will be project work whose results are evaluated. Many consider this approach more sensible and practical than the previous assignments, especially for study programs where the primary focus is not on flawless and artful writing. Perhaps this even presents an opportunity to highlight the talents of individuals who, for example, struggle with spelling.

    The fact is, academic examination procedures will have to be changed. AI – similar to other technological means – has already entered the daily lives of pupils and students. Now, methods are needed to assess human knowledge in other ways.

    An absolutely worthwhile podcast on this topic is available here: https://www.ardaudiothek.de/episode/11km-der-tagesschau-podcast/ki-or-not-ki-koennen-wir-ihre-texte-noch-enttarnen/tagesschau/13779441/

  • Here Comes EVA

    We have exciting news! Since the beginning of the year, we have been working on a new project. Together with four associations from Schleswig-Holstein, we have launched the project “Chatbot meets administration: Intelligent dialogue systems as a future solution for non-profit organizations.”

    Non-profit organizations often face as many bureaucratic hurdles as companies. They are confronted with administrative tasks that take up a large part of the available time. At the same time, these organizations live from the commitment of volunteers and rarely have the resources for expensive software. Chatbots can provide a remedy here and facilitate association work. Be it in the provision of internal association information, in the formulation of cover letters or in the onboarding of new members. Existing knowledge no longer has to be painstakingly researched manually, but can be provided in natural language.

    However, the use of chatbots requires consideration of the AI Regulation and all relevant data protection regulations. Here, too, some obstacles can arise, especially for people who are not experts in the field. With our project idea, we offer non-profit organizations complete data sovereignty and guarantee compliance with all relevant guidelines.

    With our idea, we approached the Civic Innovation Platform program and were able to convince them. Our project will be funded by the Federal Ministry of Labor and Social Affairs until the end of 2026.

    Together with non-profit organizations from a wide range of areas, we are developing an open source solution that will also be available to other associations in the future. With EVA (short for electronic administration assistance), a chat system is being created that is specially tailored to the needs of non-profit associations and organizations. It will be possible to host EVA yourself and thus operate independently of providers and save costs. We receive active support from the Kinderschutzbund Ostholstein, Ostsee-Holstein-Tourismus, the Landesverband der Kleingartenvereine Schleswig-Holsteins and InMotion. Thanks to the diverse fields of activity of these associations, we can take into account the most diverse requirements and needs in the development of EVA and jointly develop a dialogue system that is efficient and tailor-made.

    You can find more information at: https://www.civic-innovation.de/projektfoerderung/foerderprogramme/foerderrunde-2025-26#page-2254

  • OUR MAY SPLITBLOG: HOW AI IS REVOLUTIONIZING CHEMISTRY

    In May, we are addressing a topic suggested by our ‘Team Lead Development’ Bartosz and the question of how AI can transform chemistry.

    In our daily lives, artificial intelligence is already an integral part of many areas: we work with language models and chatbots and use smart home devices. Yet we are often not truly aware of the possibilities that the use of AI opens up. As our current topic shows, the applications of AI extend far beyond ‘nice-to-have’ gadgets.

    Those who follow the Nobel Prizes may have noticed: the 2024 Nobel Prize in Chemistry went half to David Baker and the other half jointly to Demis Hassabis and John Jumper. The work of the three focuses on the design of proteins and the prediction of their structures – with the help of artificial intelligence.

    Proteins are biological macromolecules composed of amino acids. These, in turn, are linked by peptide bonds. Proteins are present in every cell and perform various vital functions in the human body. They serve as hormones, signaling molecules, antibodies, and tissue building blocks.

    The structure of proteins can be divided into several levels.

    Primary Structure: This is the simple sequence of amino acids in the chain. Each amino acid has a specific position, and this sequence is crucial for the protein’s function.

    Secondary Structure: Here, the amino acid chains fold into regular patterns, such as spirals (helices) or folded sheets (beta-sheets). These structures help keep the protein stable.

    Tertiary Structure: The entire amino acid chain further folds into a complex three-dimensional shape. This shape is crucial for the protein’s function, as it determines how the protein interacts with other molecules.

    Quaternary Structure: Some proteins consist of multiple subunits that together form a larger structure. These subunits can assemble into a functional complex.

    Hassabis and Jumper succeeded in developing a model for predicting protein structures called “AlphaFold2”. Both work for Google subsidiary DeepMind. With this model, it is possible to accurately predict the three-dimensional structure of a protein based on its amino acid sequence. Previously, this was only possible with the complex method of X-ray crystallography. With its help, the double helix structure of DNA was deciphered in 1953. However, to perform X-ray crystallography, the protein in question must first be available in crystalline form. Thanks to “AlphaFold2”, the three-dimensional shape of a protein can now be calculated without the need for laboratory analysis.

    Perhaps the most remarkable part of the work is the development of an entirely new type of protein. Something that was previously considered almost impossible. Using the “Rosetta” software, developed under Baker’s leadership back in the 90s, and with the help of “AlphaFold2”, entirely new proteins can now be constructed for the first time. It thus becomes possible to create proteins with specific functions. These could be used, for example, for the targeted development of drugs or vaccines. The creation of new nanomaterials is also conceivable.

    AI has thus opened up entirely new possibilities for protein research, which can significantly influence our future.

  • Smart Funding with the Research Allowance

    Do you, like us, work on an innovative AI project? Then you have the chance to have the costs for wages, salaries and development partially reimbursed by the tax office. As part of the research allowance, up to 35% of the costs will be reimbursed retroactively.

    During the application process, the project is of course scrutinized closely: What technical risks exist or existed, and what exactly is the innovative core of the project? The support of the WTSH, management consultant Roman Charkoi (Pollecon), and the AI network Schleswig-Holstein (KI.SH) was a great help to us during the application.

    In contrast to other funding opportunities, the application for the research allowance involves comparatively little effort and is also easy to manage for inexperienced startups. The research allowance is a practical way to promote innovative projects in a targeted manner.

    For us at Splitbot, the research allowance is a complete success. We were able to convince the expert jury and, thanks to the funding, we can further advance our project.

    Read more about it here: https://kuenstliche-intelligenz.sh/de/forschungszulage-splitbot

  • OUR SPLIT BLOG IN APRIL: DeepSeek-R1

    As always, we take up topic suggestions from our team in this section. This month’s request comes from our Managing Director Caro and deals with the current hype surrounding DeepSeek-R1.

    When dealing with the Chinese language model DeepSeek-R1, one property immediately catches the eye: the immense price advantage compared to other models – at competitive performance. This competitive price is made possible by an extremely efficient training method and model architecture. DeepSeek-R1 has generated real hype and is presented in the media almost as a quantum leap. But what is actually behind it?

    Much ado about nothing?

    In fact, DeepSeek-R1 is not a new technology, but merely an impressive combination of already known technologies, such as the Mixture of Experts approach.

    This innovative combination enables the model to be operated on comparatively inexpensive hardware. In addition, DeepSeek-R1 has refined its use of reinforcement learning: in simple terms, the results produced by DeepSeek-R1 are fed into the training of the next model version, achieving an enormous improvement in the “intelligence” of that next generation. OpenAI, for example, currently requires much more effort to significantly improve its own models, which is why the company is coming under increasing pressure from DeepSeek-R1 and the publication of the associated technical details.
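
    The Mixture-of-Experts approach mentioned above can be illustrated in a few lines: a small gating network scores all experts for each input, only the top-k experts actually compute anything, and their outputs are blended with renormalized softmax weights. A toy NumPy sketch of the routing idea – not DeepSeek's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_in, d_out, top_k = 4, 8, 8, 2

# Each "expert" is just a linear map here; in a real MoE it is a full FFN block.
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_in, n_experts))

def moe_forward(x):
    """Route input x to the top-k experts and mix their outputs."""
    logits = x @ gate_w
    top = np.argsort(logits)[-top_k:]        # indices of the k best-scoring experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                 # softmax over the selected experts only
    # Only the selected experts compute anything -- that is the cost saving.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=d_in)
y = moe_forward(x)
print(y.shape)  # (8,)
```

    Because only 2 of the 4 experts run per input, the compute per token stays far below that of a dense model with the same total parameter count – the same trade-off, scaled up, is what makes such models comparatively cheap.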

    With all the excitement, it should not be forgotten that there have been almost weekly innovative developments in the field of language models in recent years – from companies, universities or state-driven actors. One example of this is the Teuken7B model. Neither the open source approach nor the basis of DeepSeek-R1 are really novel.

    Rather, the development of DeepSeek-R1 is a reaction to the sanctions of the USA – China lacks powerful hardware.

    DeepSeek-R1 is by no means the first open source language model to have made it to the top of the comparison table for a short period of time.

    DeepSeek-R1 can be used by anyone – even without operating it on one’s own hardware. As with other models, however, the data entered is passed on to the manufacturer. In addition, one should be aware that the stored knowledge is partly politically colored – a circumstance that has also been observed, in part, with other models. Above all, then, DeepSeek-R1 has achieved one thing: finding a cost-effective and highly efficient way to train and improve complex language models, and showing that way to the public. It can therefore be assumed that development in this area will pick up even more speed following the publication of DeepSeek-R1. For a lasting advantage over the competition, however, it will probably not be enough.

  • The Splitblog in March: Precise Prompting

    This month, we are addressing the topic requested by our colleague Heinke and learning everything one needs to know about prompting.

    The hype surrounding chatbots has not subsided even two and a half years after the release of ChatGPT, and by now, almost everyone has experimented with a chatbot. While chatbots are already being used in many companies, others have not yet recognized their benefits or were dissatisfied with their test results. But what could be the reason for this? One cause for unsatisfactory answers can be insufficient prompts. To ensure you achieve the best possible results in the future, we have compiled an overview of what to consider when prompting.

    What is Prompting?

    A prompt is the input given to the language model or chatbot to obtain a specific answer or a certain behavior. Prompting has evolved into a veritable sport for many people, serving not only to achieve the best possible results but also capable of being misused. For instance, there are individuals who deliberately try to obtain erroneous results through prompting, for example, to harm the providing companies.

    Why is Prompting Important?

    Prompting is important because it influences the quality and relevance of a language model’s responses. A well-formulated prompt can help the model deliver precisely the information needed while avoiding unnecessary or irrelevant information. This saves time and improves efficiency when using language models.

    Tips for Effective Prompting

    1. Be clear and precise:
      A good prompt should be clear and precise. Avoid vague or ambiguous phrasing. Provide the model with as much context as possible to obtain the desired answer.
    2. Use specific questions:
      Ask specific questions instead of making general requests. For example: “What steps are necessary to start a new project?” instead of “Tell me about project management.”
    3. Use examples:
      Provide the model with examples to clarify the desired answer. For example: “Create an email thanking a customer for their order. Here is an example: ‘Thank you for your order. We look forward to delivering your goods soon.’”
    4. Use instructions:
      Give the model clear instructions on what it should do. For example: “Write a blog post about the benefits of artificial intelligence. The post should be at least 500 words long and written in an informative tone.”
    5. Avoid ambiguities:
      Avoid ambiguities and unnecessary information. A clear and precise prompt helps the model deliver the desired answer faster and more accurately.
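
    The five tips above can be folded into a small prompt template: context first, then a specific task, optional examples, and explicit constraints. A sketch of such a prompt builder in Python (purely illustrative, not tied to any particular chatbot API):

```python
def build_prompt(task, context="", examples=None, constraints=None):
    """Assemble a clear, specific prompt from its parts (tips 1-5 above)."""
    parts = []
    if context:
        parts.append(f"Context: {context}")    # tip 1: be clear, provide context
    parts.append(f"Task: {task}")              # tips 2 & 4: a specific instruction
    for ex in examples or []:
        parts.append(f"Example: {ex}")         # tip 3: show what you want
    for c in constraints or []:
        parts.append(f"Constraint: {c}")       # tip 5: rule out ambiguity
    return "\n".join(parts)

print(build_prompt(
    "Write a thank-you email for a customer order.",
    context="B2B hardware shop, formal but friendly tone.",
    examples=["Thank you for your order. We look forward to delivering your goods soon."],
    constraints=["at most 120 words"],
))
```

    The point is not the code itself but the habit it encodes: every prompt states its context, task, examples, and constraints explicitly instead of leaving the model to guess.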

    Examples of Effective Prompting

    1. Answering Questions:
      • Prompt: “Explain the basics of artificial intelligence.”
      • Answer: “Artificial Intelligence (AI) is a field of computer science concerned with the development of systems capable of performing tasks that normally require human intelligence.”
    2. Creating Texts:
      • Prompt: “Write a blog post about the benefits of artificial intelligence. The post should be at least 500 words long and written in an informative tone.”
      • Answer: “Artificial Intelligence (AI) has made enormous progress in recent years and offers a variety of benefits for various industries. From automating routine tasks to improving decision-making – AI has the potential to significantly increase efficiency and productivity in many areas…”
    3. Translations:
      • Prompt: “Translate the following text into German: ‘The quick brown fox jumps over the lazy dog.’”
      • Answer: “Der schnelle braune Fuchs springt über den faulen Hund.”

    Conclusion

    Prompting is a powerful tool that helps you get the best results from language models. With the right techniques and tips, you can ensure that your prompts are clear, precise, and effective. Utilize these strategies to improve the accuracy and relevance of responses and make your work more efficient.

  • OUR SPLIT BLOG IN FEBRUARY: DEEPFAKES AND HOW TO RECOGNIZE THEM

    In our “Splitblog” section, we are known to take up topic suggestions from our team and often deal with them critically – even when it comes to artificial intelligence. This month I, Katha, was allowed to choose a topic myself…

    If you look to the right and left these days, or even just at the smartphone in your own hand, you will more and more often feel fear and anxiety. Reports about wars, crises, German and international politics, attacks, and other threats have become a fixture of the daily news. Populism in all shapes and colors influences us more than we are often aware of (our recommendation: https://www.zdf.de/show/mai-think-x-die-show/maithink-x-folge-31-populismus-100.html ). While it has been difficult enough in recent years to recognize what is fact and what is cleverly placed fiction, another challenge is now increasingly being added: deepfakes.

    WHAT ARE DEEPFAKES ACTUALLY?

    Deepfakes are false reports created using artificial intelligence. These can be simple texts and articles, but also photos, audio files, or videos. While (successful) image manipulation used to require a great deal of expertise, it is becoming increasingly easy to generate believable fakes with the massive number of freely available AI tools. Deepfakes are used specifically to spread false information – for various reasons and from various camps.

    HOW DO I RECOGNIZE DEEPFAKES?

    It gets really exciting when it comes to the question of how to protect yourself from falling for deepfakes. This is not so easy due to the constant, rapid improvement of the technology. If you want to test your ability to distinguish between man and machine, you can do so here, for example: https://www.humanornot.ai/. There are various, also AI-based tools that promise to unmask AI-generated content. Unfortunately, none of them work really reliably to date. So what else can you do?

    CHECK FACTS:

    Regardless of whether it is text, (moving) image or sound – try to assess as neutrally as possible whether the statements contained can be true and are logically conclusive. If an independent assessment is not possible, it is worth looking for further reports on the topic. It often helps to look at the alleged facts from different angles. Helpful here are, for example, www.mimikama.org, www.correctiv.org or other fact-checking portals.

    CHECK SOURCES:

    Another important indication of reliability is the origin of the respective message. Who is spreading this information? Is it a reputable media portal or is the origin unknown? (Did you know: Our chatbot KOSMO provides the sources used with every generated answer).

    FIND EVIDENCE:

    As with any investigation, the following also applies here: Is there evidence for the present message or the contained claims and theses? Could the person shown or quoted have been on site at all?

    LOOK CLOSELY:

    Especially with photos and videos, you should take a close look. At least at the moment, AI-generated images and videos are often not perfect: telltale signs include superfluous fingers, unrealistic teeth, or out-of-place details such as jewelry that appears out of nowhere. In videos, the lip movements often do not match the soundtrack, or the facial expressions seem strikingly unnatural. The image background can also be very revealing: Does the perspective fit? Could the picture have been taken from this point of view? If the image or video passes this first assessment, it is often still worth doing a reverse search – Google Lens, for example, makes it possible to use images from your smartphone for an internet search; alternatively, the URL of the image can be entered into any search engine. Often you will come across the original photo that was used to create a fake video. And if the message concerns a major event, you can expect to find more pictures of it – after all, almost everyone carries a smartphone with a camera these days.

    OUR CONCLUSION:

    We will all be confronted more and more often with increasingly credible deepfakes in the future. It is all the more important to prepare for this and to know how to recognize deepfakes.

    MORE ON THE TOPIC:

    Further information on the topic can be found, for example, in the #Faktenfuchs section of BR or at klicksafe.de. ZDF has also dedicated an episode of its logo! series to deepfakes, presenting the topic in a way suitable for children and young people.

  • Our Splitblog in January: the Chatbot as a Search Engine

    At the beginning of the year, we are dedicating ourselves to a topic suggested by our trainee Artur and exploring the question of whether chatbots could replace conventional search engines in the future.

    Most of us use them daily: search engines. Whether Google, Bing, Yahoo, or one of the countless other search engines, they have become indispensable in our daily lives. Does your son want to know which was the fastest dinosaur? Do you need a recipe for the next family celebration? And what’s the weather like at the holiday resort? The well-known quote from Bill Gates, “Information at your fingertips,” has become a comfortable reality for all of us. We click through links and websites and find the information that is most useful to us in the shortest possible time.

    But what about our private data or our work materials? When it comes to the details of your own mobile phone contract or the company’s internal guidelines, search engines are only of limited help.

    Chatbots can take our information procurement to the next level. There are already ways to link search engines and language models. The answer to a question is then no longer just a list of links, but a fully formulated answer in natural language and with a reference to the source. But that’s not all. Various other systems can be connected to chatbots via API, thus enabling access to the information they contain. For companies, this means that employees can quickly and easily access all relevant information or files with the help of chatbots. No more long searches in various filing and information systems are necessary.
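
    The pattern described here – retrieve the matching documents, answer from them, and cite the sources – can be sketched with a toy keyword retriever. The scoring below is deliberately naive; a real system would use embeddings for retrieval and a language model for the final answer:

```python
def retrieve(query, docs, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(docs.items(),
                    key=lambda item: len(q & set(item[1].lower().split())),
                    reverse=True)
    return ranked[:k]

docs = {
    "contract.txt": "Your mobile phone contract ends on 31 December; the notice period is three months.",
    "insurance.txt": "The travel health insurance covers medical treatment abroad.",
    "recipes.txt": "A recipe for apple cake with cinnamon and walnuts.",
}

hits = retrieve("When does my mobile phone contract end?", docs)
# A language model would now generate the answer from `hits`,
# and the chat interface would list the file names as sources.
for name, text in hits:
    print(f"[source: {name}] {text}")
```

    Keeping the retrieved file names attached to the answer is what makes the response verifiable – the user can always open the cited document and check.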

    We are certain that this progress will also find its way into the private sector in the near future. Then the chatbot may not only be able to announce the weather at the holiday resort, but also know the details of the booking and, if necessary, the conditions of your travel health insurance. Anyone who has ever tried to switch to the best possible mobile phone contract knows that the comparison alone can eat up several hours. How pleasant would it be if the chatbot on your smartphone not only found the optimal provider, but also knew your notice period and directly pre-formulated the corresponding termination letter?

    But beware: not all obstacles have been cleared yet. All currently available language models tend to hallucinate, especially when the information sought is not available. This phenomenon is currently occupying researchers and developers worldwide. At Splitbot, we increase the reliability and verifiability of the answers provided by citing the sources used with each answer.

    Thanks to the rapid development of chatbots and language models, the end of hallucinations is within reach and one can be curious to see in which areas of life chatbots will soon be introduced.