- name: jan-ai
- description: Jan is an open source alternative to ChatGPT that runs 100% offline on your computer
- snapcraft: The app is built with electron-builder; there is no snapcraft.yaml, since the snap packaging is handled under the hood by electron-builder (see the configuration sketch after this list).
- upstream: https://github.com/janhq/jan
- upstream-relation: maintainer of Jan
- supported-category: tools for local, non-root user driven configuration of/switching to development workspaces/environments
- reasoning: The application requires direct GPU access for hardware-accelerated LLM (Large Language Model) inference. I have tested the app with existing interfaces such as gpu-control and hardware-observe, but these are not sufficient: the application fails to detect the GPU under strict confinement. When testing with classic confinement, the GPU is detected correctly and the application works as intended, providing the acceleration needed to run large language models locally.
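For context on the packaging point above, here is a minimal sketch of how a snap target is typically declared for electron-builder (e.g. in an electron-builder.yml). The values are illustrative assumptions, not Jan's actual configuration:

```yaml
# Illustrative electron-builder configuration; not Jan's real settings.
productName: Jan
linux:
  target:
    - snap
snap:
  # electron-builder generates the snap metadata from this section,
  # which is why no standalone snapcraft.yaml exists in the repository.
  confinement: strict   # this request is about being allowed to use "classic"
  grade: stable
  plugs:
    - default           # keep electron-builder's default plug set (x11, opengl, network, ...)
    - hardware-observe  # extra interface tried for GPU detection
```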
I understand that strict confinement is generally preferred over classic. I have tried the existing interfaces to make the snap work under strict confinement; the kind of plug set I experimented with is sketched below.
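This is a rough sketch of the plug declaration in the generated snap metadata under strict confinement, limited to standard snapd interfaces; the exact snapcraft.yaml that electron-builder generates for Jan may differ:

```yaml
# Illustrative plug declaration for the strictly confined snap.
apps:
  jan:
    command: jan            # placeholder; the real command is generated by electron-builder
    plugs:
      - desktop
      - desktop-legacy
      - x11
      - wayland
      - home
      - network
      - opengl              # standard GPU interface
      - hardware-observe    # hardware enumeration; GPU still not detected in my tests
```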