
Merge branch 'master' into tofuSCHNITZEL-patch-1
tofuSCHNITZEL authored Sep 13, 2024
2 parents 9e86b7c + b295229 commit 4a70eb2
Showing 101 changed files with 1,009 additions and 816 deletions.
51 changes: 0 additions & 51 deletions .github/workflows/command-rebase.yml

This file was deleted.

2 changes: 1 addition & 1 deletion .github/workflows/fixup.yml
@@ -28,6 +28,6 @@ jobs:

steps:
- name: Run check
uses: skjnldsv/block-fixup-merge-action@42d26e1b536ce61e5cf467d65fb76caf4aa85acf # v1
uses: skjnldsv/block-fixup-merge-action@c138ea99e45e186567b64cf065ce90f7158c236a # v2
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
2 changes: 1 addition & 1 deletion .github/workflows/generate_catalog_templates.yml
@@ -20,7 +20,7 @@ jobs:
pre-build-command: pip install -r requirements.txt
build-command: make gettext

- uses: peter-evans/create-pull-request@v6
- uses: peter-evans/create-pull-request@v7
id: cpr
with:
token: ${{ secrets.COMMAND_BOT_PAT }}
6 changes: 3 additions & 3 deletions .github/workflows/sphinxbuild.yml
@@ -23,7 +23,7 @@ jobs:
shell: bash
run: tar czf /tmp/documentation.tar.gz -C user_manual/_build/html .
- name: Upload static documentation
uses: actions/upload-artifact@v4.3.6
uses: actions/upload-artifact@v4.4.0
with:
name: User manual.zip
path: "/tmp/documentation.tar.gz"
@@ -55,7 +55,7 @@ jobs:
shell: bash
run: tar czf /tmp/documentation.tar.gz -C developer_manual/_build/html/com .
- name: Upload static documentation
uses: actions/upload-artifact@v4.3.6
uses: actions/upload-artifact@v4.4.0
with:
name: Developer manual.zip
path: "/tmp/documentation.tar.gz"
@@ -75,7 +75,7 @@ jobs:
shell: bash
run: tar czf /tmp/documentation.tar.gz -C admin_manual/_build/html/com .
- name: Upload static documentation
uses: actions/upload-artifact@v4.3.6
uses: actions/upload-artifact@v4.4.0
with:
name: Administration manual.zip
path: "/tmp/documentation.tar.gz"
4 changes: 1 addition & 3 deletions admin_manual/ai/app_assistant.rst
@@ -43,10 +43,8 @@ Machine translation

In order to make use of machine translation features in the assistant, you will need an app that provides a translation backend:

* :ref:`translate<ai-app-translate>` - Runs open source AI translation models locally on your own server hardware (Customer support available upon request)
* :ref:`translate2 (ExApp)<ai-app-translate2>` - Runs open source AI translation models locally on your own server hardware (Customer support available upon request)
* *integration_deepl* - Integrates with the deepl API to provide translation functionality from Deepl.com servers (Only community supported)
* *integration_libretranslate* - Integrates with the open source LibreTranslate API to provide translation functionality hosted commercially or on your own hardware (Only community supported)

Speech-To-Text
~~~~~~~~~~~~~~
@@ -69,7 +67,7 @@ Text-To-Image

In order to make use of Text-To-Image features, you will need an app that provides an image generation backend:

* text2image_stablediffusion2 (Customer support available upon request)
* text2image_stablediffusion (Customer support available upon request)
* *integration_openai* - Integrates with the OpenAI API to provide AI functionality from OpenAI servers (Customer support available upon request; see :ref:`AI as a Service<ai-ai_as_a_service>`)
* *integration_replicate* - Integrates with the replicate API to provide AI functionality from replicate servers (see :ref:`AI as a Service<ai-ai_as_a_service>`)

3 changes: 2 additions & 1 deletion admin_manual/ai/app_context_chat.rst
@@ -13,7 +13,7 @@ Together they provide the ContextChat text processing tasks accessible via the :

The *context_chat* and *context_chat_backend* apps will use the Free text to text task processing providers like OpenAI integration, LLM2, etc., and such a provider is required on a fresh install, or it can be configured to run open source models entirely on-premises. Nextcloud can provide customer support upon request; please talk to your account manager about the possibilities.

This app supports input and output in languages other than English if the language model supports the language.
This app supports input and output mainly in English; other languages may work if the language model supports them, but good results are currently not guaranteed.

Requirements
------------
@@ -100,6 +100,7 @@ Known Limitations
-----------------

* Language models are likely to generate false information and should thus only be used in situations that are not critical. It's recommended to only use AI at the beginning of a creation process and not at the end, so that outputs of AI serve as a draft for example and not as final product. Always check the output of language models before using it.
* Context Chat is not integrated into the chat UI of the Assistant app at the moment, but has its own interface in the Assistant modal
* Make sure to test this app for whether it meets your use-case's quality requirements
* Customer support is available upon request; however, we can't solve false or problematic output, most performance issues, or other problems caused by the underlying model. Support is thus limited to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)
* Nextcloud usernames can be at most 56 characters long. This is a limitation of the vector database we use (Chroma DB) and will be fixed soon.
14 changes: 14 additions & 0 deletions admin_manual/ai/app_llm2.rst
@@ -11,8 +11,22 @@ This app uses `ctransformers <https://github.com/marella/ctransformers>`_ under
* `Llama3 8b Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3-8B-Instruct-GGUF>`_ (reasonable quality; fast; good acclaim; multilingual output may not be optimal)
* `Llama3 70B Instruct <https://huggingface.co/QuantFactory/Meta-Llama-3-70B-Instruct-GGUF>`_ (good quality; good acclaim; good multilingual output)

Multilinguality
---------------

This app supports input and output in languages other than English if the underlying model supports the language.

Llama 3.1 `supports the following languages: <https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct#multilingual-benchmarks>`_

* English
* Portuguese
* Spanish
* Italian
* German
* French
* Hindi
* Thai

Requirements
------------

2 changes: 2 additions & 0 deletions admin_manual/ai/app_stt_whisper2.rst
@@ -13,6 +13,8 @@ This app uses `faster-whisper <https://github.com/SYSTRAN/faster-whisper>`_ unde
* OpenAI Whisper large-v2 or v3 (multilingual)
* OpenAI Whisper medium.en (English only)

Whisper large v3 supports ~100 languages and shows outstanding performance in ~10 of them. For more details see the `OpenAI Whisper paper <https://cdn.openai.com/papers/whisper.pdf>`_.

Requirements
------------

137 changes: 137 additions & 0 deletions admin_manual/ai/app_summary_bot.rst
@@ -0,0 +1,137 @@
==========================================
App: Summary Bot (Talk chat summarize bot)
==========================================

.. _ai-app-summary-bot:

The *Summary Bot* app utilizes Large Language Model (LLM) providers in Nextcloud and can be added to a conversation in `Nextcloud Talk` to generate summaries from the chat messages of that room, either on demand or following a schedule.
It can run on open source or proprietary models, either on-premises or in the cloud, leveraging apps like the `Local large language model app <https://apps.nextcloud.com/apps/llm2>`_ or the `OpenAI and LocalAI integration app <https://apps.nextcloud.com/apps/integration_openai>`_.

Nextcloud can provide customer support upon request; please talk to your account manager about the possibilities.

The app currently supports the following languages:

* English (en)

The quality of summaries depends directly on the quality of the underlying model. It is recommended to test the model for the desired use-case before applying it.

Requirements
------------

* Minimal Nextcloud version: 30
* Docker
* AppAPI >= 3.0.0
* Talk
* Task Processing Provider like Local large language model app (llm2) or OpenAI and LocalAI integration app (integration_openai)
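
To check whether an existing instance already meets these requirements, a quick look with the ``occ`` tool can help. This is only a sketch; the web server user and paths depend on your setup:

.. code-block::

   # Show the Nextcloud server version (must be 30 or later)
   sudo -u <the_user_the_webserver_is_running_as> php ./occ status

   # Check that AppAPI (app_api) and Talk (spreed) are installed, and in which versions
   sudo -u <the_user_the_webserver_is_running_as> php ./occ app:list | grep -E 'app_api|spreed'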

Space usage
~~~~~~~~~~~

* ~100MB

Installation
------------

0. Make sure the following apps are installed (a command-line install sketch follows this list):

   - `Nextcloud AppAPI app <https://apps.nextcloud.com/apps/app_api>`_

   - `Nextcloud Talk app (Spreed) <https://apps.nextcloud.com/apps/spreed>`_

   - One of the following AI model providers:

     - `Nextcloud Local large language model app <https://apps.nextcloud.com/apps/llm2>`_

     - `Nextcloud OpenAI and LocalAI integration app <https://apps.nextcloud.com/apps/integration_openai>`_
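
   If any of these are not installed yet, the classic server-side apps can be added from the command line as well. This is only a sketch using the standard ``occ`` tool with the app IDs from the links above; note that ExApps such as *llm2* are deployed via AppAPI ("External Apps") rather than ``occ app:install``:

   .. code-block::

      # Install AppAPI, Talk and (optionally) the OpenAI and LocalAI integration app
      sudo -u <the_user_the_webserver_is_running_as> php ./occ app:install app_api
      sudo -u <the_user_the_webserver_is_running_as> php ./occ app:install spreed
      sudo -u <the_user_the_webserver_is_running_as> php ./occ app:install integration_openai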


Setup (via App Store)
~~~~~~~~~~~~~~~~~~~~~

1. Install the *Summary Bot* app via the "External Apps" page in Nextcloud

2. Enable the *Summary Bot* for the selected Chatroom via the three dots menu of the Chatroom (the bot settings are located inside the *Bots* section)

Setup (Manual)
~~~~~~~~~~~~~~

After cloning this app *manually* via git into your apps directory, you will need to execute the following steps:

1. Change to the folder you have cloned the source to:

   .. code-block::

      cd /path/to/your/nextcloud/webroot/apps/summarai/

2. Build the docker image:

   .. code-block::

      docker build --no-cache -f Dockerfile -t local_summarai .

3. Run the docker image:

   *Info:*

   - APP_VERSION environment variable should be equal to the version of the *Summary Bot* you are using

   - NEXTCLOUD_URL environment variable must be set to your Nextcloud instance's URL, which must be reachable from inside the docker container

   .. code-block::

      sudo docker run -ti -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro -e APP_ID=summarai -e APP_DISPLAY_NAME="Summary Bot" -e APP_HOST=0.0.0.0 -e APP_PORT=9031 -e APP_SECRET=12345 -e APP_VERSION=1.0.0 -e NEXTCLOUD_URL='<YOUR_NEXTCLOUD_URL_REACHABLE_FROM_INSIDE_DOCKER>' -p 9031:9031 local_summarai

4. Un-register the Summary Bot if it's already installed:

   .. code-block::

      sudo -u <the_user_the_webserver_is_running_as> php /path/to/your/nextcloud/webroot/occ app_api:app:unregister summarai

5. Register the Summary Bot so that your Nextcloud instance is aware of it.

   *Info:* Adjust the host value in the following example to the IP address of the docker container (for added security)

   .. code-block::

      sudo -u <the_user_the_webserver_is_running_as> php ./occ app_api:app:register summarai manual_install --json-info '{ "id": "summarai", "name": "Summary Bot", "daemon_config_name": "manual_install", "version": "1.0.0", "secret": "12345", "host": "0.0.0.0", "port": 9031, "scopes": ["AI_PROVIDERS", "TALK", "TALK_BOT"], "protocol": "http"}' --force-scopes --wait-finish

6. Enable the *Summary Bot* for the selected Chatroom via the three dots menu of the Chatroom (the bot settings are located inside the *Bots* section); a command-line alternative is sketched below
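
   As an alternative to the web UI, the bot can also be listed and enabled per conversation from the command line. This is only a hedged sketch: the ``talk:bot:*`` occ commands are provided by the Talk app, and the exact command names and arguments should be verified on your instance (for example with ``occ list talk``):

   .. code-block::

      # List the registered bots and note the ID of the Summary Bot
      sudo -u <the_user_the_webserver_is_running_as> php ./occ talk:bot:list

      # Enable the bot (by its ID) for a conversation, identified by its token
      sudo -u <the_user_the_webserver_is_running_as> php ./occ talk:bot:setup <bot-id> <conversation-token>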

Usage
-----

After enabling the *Summary Bot* in a Chatroom, you can test its functionality by simply sending one of the messages below:

"@summary" or "@summary help"

App store
---------

You can also find the app in our app store, where you can write a review: `<https://apps.nextcloud.com/apps/summarai>`_

Repository
----------

You can find the app's code repository on GitHub where you can report bugs and contribute fixes and features: `<https://github.com/nextcloud/sumupbot>`_

Nextcloud customers should file bugs directly with our Customer Support.

Ethical AI Rating
-----------------

The ethical rating of the *Summary Bot*, which utilizes a model for text processing through the Nextcloud Assistant app, is significantly influenced by the choice and implementation of the underlying model.

Learn more about the Nextcloud Ethical AI Rating `in our blog <https://nextcloud.com/blog/nextcloud-ethical-ai-rating/>`_.

Known Limitations
-----------------

* The Summary Bot cannot access previous conversations; it only recognizes messages from the moment it was enabled in the chatroom.
* A maximum of 40,000 characters per summary is supported. This assumes the underlying model can handle this amount of text (a context length of close to 16,000 tokens).
* Languages other than English are not supported. The underlying model may still be able to understand other languages.
* AI models may occasionally produce inaccurate information. Therefore, they should be employed with caution in non-critical scenarios. It's essential to verify the accuracy of the bot's output before application.
* Be aware that AI models can consume a significant amount of energy. It's advisable to consider this factor in the planning and operation of AI systems if hosted on-premises or sustainability is a concern.
* AI models can exhibit extended processing times when run on CPUs. For enhanced efficiency, utilizing GPU support is recommended to expedite request handling.
* Customer support is available upon request; however, we can't solve false or problematic output (hallucinations), most performance issues, or other problems caused by the underlying models. Support is thus limited to bugs directly caused by the implementation of the app (connectors, API, front-end, AppAPI)