

Is there any information about what model it uses and what the context window size is? I asked it and it avoided the answer (but asking it is not a reliable way to determine this anyway).
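For the context window specifically, self-reports from the model are unreliable, but you can estimate the limit empirically by binary-searching for the largest prompt the server will accept. Here's a minimal sketch in Python against a generic OpenAI-compatible chat endpoint; the URL, API key, and model id are placeholders (Lumo doesn't expose a public API as far as I know), and repeating a short word only approximates the token count:

```python
# Rough empirical probe of a chat endpoint's context window.
# Oversized prompts are rejected (typically HTTP 400 with a
# "context length exceeded" message); accepted ones return 200.
# Binary-search the boundary between the two.
import requests

API_URL = "https://example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "sk-..."                                   # placeholder key
MODEL = "some-model"                                 # placeholder model id

def accepts(n_words: int) -> bool:
    """Return True if a prompt of roughly n_words tokens is accepted."""
    # "hello " is about one token per repetition with most BPE
    # tokenizers, so n_words approximates the prompt's token count.
    prompt = "hello " * n_words
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 1,
        },
        timeout=120,
    )
    return resp.status_code == 200

def estimate_context_window(upper: int = 1 << 18) -> int:
    """Binary-search the largest accepted prompt size, in ~tokens."""
    lo, hi = 1, upper
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if accepts(mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

if __name__ == "__main__":
    print(f"approximate context window: {estimate_context_window()} tokens")
```

The result is only a ballpark figure (the server may reserve tokens for the system prompt and the reply), but it's more trustworthy than asking the model about itself.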
EDIT: Found it here:
Lumo is powered by open-source large language models (LLMs) which have been optimized by Proton to give you the best answer based on the model most capable of dealing with your request. The models we're using currently are Nemo, OpenHands 32B, OLMO 2 32B, and Mistral Small 3. These run exclusively on servers Proton controls so your data is never stored on a third-party platform.
That one works. Thanks!