Year: 2025

  • The Splitblog in October – SovereigniWHAT?

    The Splitblog in October – SovereigniWHAT?

    The Splitblog in October – SovereigniWHAT?

    “Digital sovereignty” has become a real buzzword. But what is actually behind it? We will shed light on these and other questions this month at the request of our Managing Director, Friedrich.

    The term “digital sovereignty” has probably been encountered by everyone at some point. But do we actually know what that means? In a political sense, the word “sovereignty” means the independence of a state from others, as well as its self-determination. “Digital sovereignty” therefore means that the state or local companies should be independent of solutions from other countries in the digital sector.

    Why is this important?

    As is well known, many of the relevant software solutions come from the USA. In politically calm times, this is largely unproblematic. But what if your own data protection and data security requirements differ significantly from those of the manufacturer’s location? What legal principles apply in this case? These questions have become highly topical, at the latest since the change of power in the USA. Although there are various agreements and legal regulations on transatlantic data transfer, these are not completely legally secure. Especially since the basic understanding of data protection in Europe and the USA differs fundamentally.

    Growing uncertainty

    In February 2025, Bitkom published a study showing that the trust of German companies in international partners and software manufacturers is declining. This is contrasted by a strong dependence on imported digital technologies. Details on the study can be found here: https://www.bitkom.org/Studienberichte/2025/Digitale-Souveraenitaet

    Increasing demand

    The desire for digital sovereignty in Germany is therefore growing. What’s more, at the state level it is not just a desire, but a necessity. Of course, the major corporations do not want to spoil their business on the European market and have promptly come up with something: to give their own products the “sovereign” label.

    Is it sovereign inside where it says sovereign on the outside?

    The solution of the well-known providers seems to be simple. The data centers used should be located on European soil. This means that the data remains in the EU and is therefore safe and protected. But unfortunately it is not that simple. The Clarifying Lawful Overseas Use of Data (CLOUD) Act allows US authorities to access European customer data even extraterritorially. The hoped-for possibility of staying with the familiar software products and still remaining digitally sovereign is therefore not available to either the state or companies. This was also admitted by the Chief Legal Officer of Microsoft France at a hearing before the French Senate (read here: https://www.heise.de/news/Nicht-souveraen-Microsoft-kann-Sicherheit-von-EU-Daten-nicht-garantieren-10494684.html)

    So the only option remains to look around for trustworthy, local providers. And preferably before it’s too late.

    The good news

    We at Splitbot have been focusing on digital sovereignty and privacy by design for years. With KOSMO, we have developed a reliable enterprise chatbot – made and hosted in Germany.

    Read more on the subject, for example, at https://www.cloudcomputing-insider.de/warum-europaeische-datenraeume-us-zugriff-nicht-ausschliessen-a-baa6db7b040dd0cf5026442f4c9d10f1/

  • Fall Release 2025

    Fall Release 2025

    Our Fall Release is live!

    As the first leaves fall outside, we are bringing fresh impetus to your workflows. KOSMO now provides even more features.

    Everything in one place: The “External Resources” feature becomes “Personalization“. Here you will find all options to connect KOSMO with information sources and adapt it to your needs. File upload, standard instructions, email, and cloud integration. Make KOSMO your KOSMO.

    Keep track of your appointments: Connect your Outlook calendar and ask KOSMO for your upcoming appointments.

    Cloud: In addition to the simplified Nextcloud integration, you can now quickly and easily connect OneDrive with KOSMO.

    Memory: Do you want KOSMO to remember past conversations? With the “Memory” tag, this is no longer an issue.

    Recycle Bin: No more accidental deletions – thanks to the Recycle Bin feature, you can restore past conversations.

    QR Codes: Share your company website quickly and easily? Ask KOSMO to generate a QR code for you.

    Are you familiar with the KOSMO Help Center? Here you will find all information about the application and new features!

  • OUR SPLIT BLOG IN SEPTEMBER: ATTACKS ON AI SYSTEMS

    OUR SPLIT BLOG IN SEPTEMBER: ATTACKS ON AI SYSTEMS

    The Split Blog in September – Attacks on AI Systems

    This month, at the request of our trainee Artur, we are dealing with attacks on AI systems.

    A security vulnerability in ChatGPT has just become known. Attackers have managed to access sensitive email data. This involved information such as names and addresses from Gmail accounts. The “Deep Research” mode was used for this. Manipulated emails with invisible HTML content served as the gateway. The users themselves could not recognize the attack, and no activity on the part of the users was necessary.

    Invisible HTML content? How does that work?

    Attacks in a similar form have occurred frequently. For example, white text is written on a white background or tiny font sizes are used. Both are invisible to users, but not to AI language models. And even worse: AI systems capture these instructions and execute them. Prompt Injection Anyone who tries to induce an AI system to engage in harmful behavior with a regular prompt will quickly realize that this is not so easy. Attackers specifically suggest to the AI agents that they are authorized for the respective procedure. They textually pretend that, for example, the destination of the data export is secure and create an artificial urgency. This type of prompting is called prompt injection. It leads to system-internal instructions being circumvented or overridden. Further Weaknesses This procedure also applies to other services that can serve the AI agent as a source of information. These include, for example, PDF files, Google Drive, Notion and GitHub.

    How do I protect my AI agent from such attacks?

    There are various ways to protect yourself from such attacks. For example, the so-called red teaming. Here, experts use various tests to identify the described vulnerabilities. For example, by simulating the described scenarios. In addition, certain input formats can be blocked. In addition, the system-internal instructions should of course be formulated in such a way that the respective AI agent never performs harmful actions.

    And KOSMO?

    Our chatbot KOSMO does not yet have the technical requirements to carry out actions – neither harmful nor harmless. As soon as this step is pending, we will take all measures to continue to offer our customers the best possible protection.

  • OUR SPLITBLOG IN AUGUST: GReen AI

    OUR SPLITBLOG IN AUGUST: GReen AI

    This month’s topic request comes from our apprentice Amirreza, and we are asking whether climate protection and the use of large AI models are compatible.

    Anyone who has recently delved deeper into CO2 consumption will certainly have become aware of the environmental impact of generative AI. Studies indicate that the electricity demand for AI data centers will be eleven times higher by 2030 than in 2023. A threefold increase in water demand is also predicted. A simple query to ChatGPT, for example, requires ten times as much energy as a simple Google search. The longer the generated response from a language model, the higher the energy consumption. In April of this year, Sam Altman commented on the immense costs caused by technically unnecessary polite phrases such as “please” and “thank you”. There is talk that large tech companies intend to operate their own nuclear power plants in the future.

    All of this sounds as if companies striving to keep their CO2 footprint low would have to forgo generative AI. But is there really no alternative?

    In fact, before deploying generative AI, companies should ask themselves some economic and ecological questions, for example: Is the use of generative AI proportionate? Can the tasks the model is intended to solve not be solved by any other technology?

    Apart from that, there are also ways to influence the climate impact of generative AI. An important factor here is, of course, the choice of operator and its location. Because there are indeed operators who run AI systems in climate-neutral data centers. For example, we at Splitbot rely on data centers that are powered by renewable energies and sensibly reuse the generated waste heat. Furthermore, we offer our clients the option to operate KOSMO on-premise. This is the ideal solution if your own IT or building is already climate-neutral.

    Another exciting aspect is the training of the models themselves. Scientists have discovered that during the training of AI models, parts of the calculations are performed unnecessarily quickly. The computational work during AI training is distributed across multiple GPUs – but unevenly. This leads to less utilized GPUs having to “wait” for the more heavily used ones. Since this waiting time occurs anyway, the fast calculation is unnecessary. By adjusting the computing speed, electricity consumption could be reduced here. The researchers directly provided the suitable solution: The open-source software Perseus controls the GPU frequency of each individual calculation, thereby keeping power consumption as low as possible.

    Sources: https://t3n.de/news/ki-stromverbrauch-energie-perseus-1656380/

    https://reset.org/sprachmodelle-nachhaltig-nutzen-sparsamer-genai-gruen/

  • NEW FEATURES IN KOSMO

    NEW FEATURES IN KOSMO

    Some of you have eagerly anticipated this: KOSMO has gained new features with the latest release. Today, we will reveal what these entail.

    PDF Viewer

    When KOSMO generates a response, the sources used are always provided. If the source was a website, you could previously open it with a simple click. This functionality now extends to PDF files that you have provided to KOSMO. With a single click, the file opens in the PDF viewer. The text passages KOSMO utilized for the answer are highlighted. Additionally, you can directly print or download the file. This eliminates the lengthy search for documents!

    Scheduled tasks

    Do you regularly submit the same requests to KOSMO? Then we have the perfect solution for you: scheduled tasks. From now on, you can define what KOSMO should do for you, when, and how often. From weather reports to the latest posts from your favorite website – KOSMO summarizes your updates in a separate chat, ensuring you always stay informed.

    Push notifications

    The latest information is, of course, also available directly on your smartphone. KOSMO notifies you when scheduled tasks have been completed. This ensures you never miss any important information.

    E-Mail connection (beta)

    Currently in beta, but soon fully functional: the integration of your email inbox. Simply store your access credentials, and you can ask KOSMO about the content of your emails. This transforms your emails into a valuable source of information. This feature is already available for IMAP. Gmail users will need to exercise a little more patience.

    By the way: The email feature, as well as the familiar functions “Nextcloud”, “File Storage”, “Save Websites”, and “Standard Instructions”, can now be found under the menu item “External Resources”.

    And a small preview: The next release is already in the pipeline and is scheduled for late October. Among other things, it will include summaries at the push of a button – you can look forward to it!

  • OUR JULY SPLITBLOG: WHEN CHATBOTS BECOME POLITICAL

    OUR JULY SPLITBLOG: WHEN CHATBOTS BECOME POLITICAL

    The July Splitblog – When Chatbots Become Political

    This month, we highlight why it is important to question the origin of chatbots and AI models and to remain critical when interacting with them. The suggestion for this topic was provided by Mats from our backend team.

    Grok 4 has impressively demonstrated in recent weeks how the programming of an AI assistant or chatbot can influence its response behavior. Unrestrained, Grok generated antisemitic and racist statements that made headlines. The company xAI has since apologized, stating that Grok was programmed to respond “honestly” and “not be afraid to shock politically correct people”. Regarding the latter instruction, the goal has certainly been achieved. And even under the premise that bad press is good press, Grok has certainly served its purpose. In any case, the headlines are reason enough to seriously examine the various manufacturers and providers of chatbots and AI assistants. Regardless of the area in which the systems are to be used, a thorough review and extensive testing beforehand are urgently necessary. Especially if companies allow themselves to be represented by chatbots in their public image, serious damage to their reputation can otherwise result.

    But how can AI assistants be led to make such statements? The basis of all language models is training data of varying scope and origin. In other words, vast amounts of information are available for generating responses. How and in what way answers are to be generated from this is a question of programming or individual settings. For example, it can be determined that certain information sources should be used preferentially, or that the generated answers should be particularly humorous, scientific, long, or short. In Grok’s case, according to data scientist Jeremy Howard, there are also indications that the chatbot often represents the opinions and statements of xAI owner Elon Musk on controversial topics. However, according to programmer Simon Willison, this could be attributed to Musk’s prominent role.

    Similar trends to those currently seen with Grok can also be observed with other chatbots. DeepSeek also does not answer a number of political questions neutrally. In some cases, the generated answers are deleted shortly after creation and replaced with a “Let’s talk about something else”. Apparently, the bot’s answers are at least somewhat more neutral when using the English version than in the Chinese version. Extensive experiments with DeepSeek reveal a programmed “self-censorship”.

    In Europe, it is not uncommon to equip chatbots with certain ethical standards before they are unleashed upon humanity. For example, our chatbot KOSMO, which is based on a language model from Mixtral, responds politely evasively when it comes to violence and crime. While this behavior is desirable, we believe that objectivity in the presentation of facts should always be ensured. The integrated source verification contributes to this, giving users the opportunity to check and evaluate the sources used.

    A certain bias in language models can never be completely ruled out. A chatbot’s knowledge is only as extensive as its training data and additional information, and its response behavior is often also influenced by user feedback during finetuning. Users themselves can also significantly influence the response behavior through the prompts entered (unconsciously).

    In addition to other factors, the origin of the language model used should therefore also be thoroughly examined before relying too heavily on the correctness of the answers.

  • OUR SPLIT BLOG IN JUNE: is AI Changing Academic Exams?

    OUR SPLIT BLOG IN JUNE: is AI Changing Academic Exams?

    This month, we look into the future and address the question of how AI will impact examinations at universities and schools. This topic suggestion comes from our working student Vincent, who is currently completing an exchange semester in Sweden.

    Reports of AI-generated work by pupils and students are becoming more frequent. Increasingly, the question is being discussed in the media how educational institutions are supposed to identify which texts were actually created by humans. Despite some indications, such as specific phrasings, writing styles, and above-average flawlessness, it is already difficult to determine beyond doubt whether a particular text truly originates from a human. With increasingly improving language models and prompting methods (e.g., “Formulate as humanly as possible and include errors”), unambiguous detection will become progressively more difficult. This is a major problem, considering that a large part of academic education relies on the creation of texts. Be it for applications, examinations, master’s theses, or term papers – examiners everywhere rely on text-based methods. However, there is a high risk that these examination methods will no longer function reliably in the long term. Detector software, which promises to identify artificially generated texts, can provide clues, but is not reliable enough itself and can often be circumvented with simple means. And particularly alarming: Texts written by non-native speakers are often falsely identified as AI-generated by these programs. The risk of discrimination in selection processes can thereby increase significantly. Especially since it is not only difficult to prove that a text was created by AI, but also that the opposite is true.

    But how can universities and other educational institutions address this? Oral examination procedures could in most cases clearly show whether someone has truly thought for themselves and understood. However, oral examinations are associated with enormous time and personnel expenditure and cannot easily assess the same scope of knowledge as written examinations.

    The majority of educational institutions currently still rely on a straightforward prohibition. However, some institutions are already exploring new approaches. Till Krause from the University of Landshut, for example, allows students to actively use AI as a source – as long as this is clearly indicated. Thus, a precise indication of the language model used and the prompt applied is required. Because despite all the challenges that the use of AI brings to educational institutions, AI offers one thing above all: an incredibly vast wealth of information that can be excellently used for learning and provides a fantastic basis for the development of one’s own ideas and thoughts.

    At the University of Economics in Prague, too, a pragmatic approach is taken to the use of AI. The local Business Administration program will no longer require a traditional bachelor’s thesis starting in autumn 2024. Instead, there will be project work, the results of which will be evaluated. Many consider this approach more sensible and practical than the previous assignments. This is a thoroughly sensible approach, especially for study programs where the primary focus is not on flawless and artful writing. Perhaps this even presents an opportunity to highlight the talents of individuals who, for example, have a spelling disability.

    The fact is, academic examination procedures will have to be changed. AI – similar to other technological means – has already entered the daily lives of pupils and students. Now, methods are needed to assess human knowledge in other ways.

    An absolutely worthwhile podcast on this topic is available here: https://www.ardaudiothek.de/episode/11km-der-tagesschau-podcast/ki-or-not-ki-koennen-wir-ihre-texte-noch-enttarnen/tagesschau/13779441/

  • Here Comes EVA

    Here Comes EVA

    We have exciting news! Since the beginning of the year, we have been working on a new project. Together with four associations from Schleswig-Holstein, we have launched the project “Chatbot meets administration: Intelligent dialogue systems as a future solution for non-profit organizations.”

    Non-profit organizations often face as many bureaucratic hurdles as companies. They are confronted with administrative tasks that take up a large part of the available time. At the same time, these organizations live from the commitment of volunteers and rarely have the resources for expensive software. Chatbots can provide a remedy here and facilitate association work. Be it in the provision of internal association information, in the formulation of cover letters or in the onboarding of new members. Existing knowledge no longer has to be painstakingly researched manually, but can be provided in natural language.

    However, the use of chatbots requires consideration of the AI ​​Regulation and all relevant data protection regulations. Here, too, some obstacles can arise, especially for people who are not experts in the field. With our project idea, we offer non-profit organizations complete data sovereignty and guarantee compliance with all relevant guidelines.

    With our idea, we approached the Civic Innovation Platform program and were able to convince them. Our project will be funded by the Federal Ministry of Labor and Social Affairs until the end of 2026.

    Together with non-profit organizations from a wide range of areas, we are developing an open source solution that will also be available to other associations in the future. With EVA (short for electronic administration assistance), a chat system is being created that is specially tailored to the needs of non-profit associations and organizations. It will be possible to host EVA yourself and thus operate independently of providers and save costs. We receive active support from the Kinderschutzbund Ostholstein, Ostsee-Holstein-Tourismus, the Landesverband der Kleingartenvereine Schleswig-Holsteins and InMotion. Thanks to the diverse fields of activity of these associations, we can take into account the most diverse requirements and needs in the development of EVA and jointly develop a dialogue system that is efficient and tailor-made.

    You can find more information at: https://www.civic-innovation.de/projektfoerderung/foerderprogramme/foerderrunde-2025-26#page-2254

  • OUR MAY SPLITBLOG: HOW AI IS REVOLUTIONIZING CHEMISTRY

    OUR MAY SPLITBLOG: HOW AI IS REVOLUTIONIZING CHEMISTRY

    In May, we are addressing a topic suggested by our ‘Team Lead Development’ Bartosz and the question of how AI can transform chemistry.

    In our daily lives, artificial intelligence is already an integral part of many areas. We work with language models and chatbots and use smart home devices. However, we are often not truly aware of the unforeseen possibilities that arise from the use of AI. Our current topic shows that the applications of AI, however, extend far beyond ‘nice-to-have’ gadgets.

    Those interested in the Nobel Prize awards may have noticed. The 2024 Nobel Prize in Chemistry went half to David Baker and the other half to Demis Hassabis and John Jumper. The work of the three focuses on the design of proteins, as well as the prediction of their structures – with the help of artificial intelligence.

    Proteins are biological macromolecules composed of amino acids. These, in turn, are linked by peptide bonds. Proteins are present in every cell and perform various vital functions in the human body. They serve as hormones, signaling molecules, antibodies, and tissue building blocks.

    The structure of proteins can be divided into several levels.

    Primary Structure: This is the simple sequence of amino acids in the chain. Each amino acid has a specific position, and this sequence is crucial for the protein’s function.

    Secondary Structure: Here, the amino acid chains fold into regular patterns, such as spirals (helices) or folded sheets (beta-sheets). These structures help keep the protein stable.

    Tertiary Structure: The entire amino acid chain further folds into a complex three-dimensional shape. This shape is crucial for the protein’s function, as it determines how the protein interacts with other molecules.

    Quaternary Structure: Some proteins consist of multiple subunits that together form a larger structure. These subunits can assemble into a functional complex.

    Hassabis and Jumper succeeded in developing a model for predicting protein structures called “AlphaFold2”. Both work for Google subsidiary DeepMind. With this model, it is possible to accurately predict the three-dimensional structure of a protein based on its amino acid sequence. Previously, this was only possible with the complex method of X-ray crystallography. With its help, the double helix structure of DNA was deciphered in 1953. However, to perform X-ray crystallography, the protein in question must first be available in crystalline form. Thanks to “AlphaFold2”, the three-dimensional shape of a protein can now be calculated without the need for laboratory analysis.

    Perhaps the most remarkable part of the work is the development of an entirely new type of protein. Something that was previously considered almost impossible. Using the “Rosetta” software, developed under Baker’s leadership back in the 90s, and with the help of “AlphaFold2”, entirely new proteins can now be constructed for the first time. It thus becomes possible to create proteins with specific functions. These could be used, for example, for the targeted development of drugs or vaccines. The creation of new nanomaterials is also conceivable.

    AI has thus opened up entirely new possibilities for protein research, which can significantly influence our future.

  • Smart Funding with the Research Allowance

    Smart Funding with the Research Allowance

    Do you, like us, work on an innovative AI project? Then you have the chance to have the costs for wages, salaries and development partially reimbursed by the tax office. As part of the research allowance, up to 35% of the costs will be reimbursed retroactively.

    As part of the application process, of course, close attention is paid. What technical risks are there or were there, and what exactly is the innovative core of the project? The support of WTSH, the management consultant Roman Charkoi (Pollecon) and the AI network Schleswig-Holstein (KI.SH) was a great help in the application process for us.

    In contrast to other funding opportunities, the application for the research allowance involves comparatively little effort and is also easy to manage for inexperienced startups. The research allowance is a practical way to promote innovative projects in a targeted manner.

    For us at Splitbot, the research allowance is a complete success. We were able to convince the expert jury and, thanks to the funding, we can further advance our project.

    Read more about it here: https://kuenstliche-intelligenz.sh/de/forschungszulage-splitbot