Why I created this website

I'm a 3D artist who primarily uses AI to write scripts for my distributed render software and to spark my creativity. I grew tired of relying on large companies like OpenAI and Anthropic for AI, and then I discovered I could run models on my own computer. Fortunately, I already had a powerful machine and a small render farm, which gave me the hardware I needed to experiment with AI. I followed a few tutorials to learn more about AI models and downloaded Ollama because of its easy setup. I even explored models beyond text-to-text, like image-to-image and image + text, to help inspire changes and additions to my work.

I decided to create a privacy-focused website that anyone could use, because privacy is a key differentiator: large companies often use user interactions to fingerprint individuals and train their AI. That single choice is what will let a website made by one guy in a basement win over the users who value their privacy.

Frequently Asked Questions (FAQ)

Here are some common questions and answers about our service:

1. Are there limits for models?

Yes. The larger models are limited to keep things fair for all users and to let more users share a single server.

2. What are the limits?

One question per 120 seconds, and 20 questions per day across the large models. For example, 10 image_large requests plus 10 text_large requests would use up your daily limit.
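The limits above amount to a per-user cooldown plus a daily cap. A minimal sketch of that logic in Python (a hypothetical helper, not the site's actual code) might look like:

```python
import time

# Assumed constants matching the limits described above:
# one large-model request per 120 seconds, at most 20 per rolling day.
COOLDOWN_S = 120
DAILY_CAP = 20
DAY_S = 86400

class LargeModelLimiter:
    """In-memory rate limiter; nothing is ever written to disk."""

    def __init__(self):
        # user_id -> list of timestamps of accepted requests
        self.history = {}

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        stamps = self.history.setdefault(user_id, [])
        # Drop requests older than 24 hours.
        stamps[:] = [t for t in stamps if now - t < DAY_S]
        if stamps and now - stamps[-1] < COOLDOWN_S:
            return False  # still inside the 120-second cooldown
        if len(stamps) >= DAILY_CAP:
            return False  # daily cap of 20 large-model requests reached
        stamps.append(now)
        return True
```

Because the history lives in a plain dictionary, it disappears whenever the process restarts, which fits the RAM-only storage described elsewhere on this page.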

3. Is there a way to remove limits?

Yes. I can host a dedicated model for you, which bypasses the limits, but this is only recommended for businesses or for users who make hundreds of requests a day.

4. Is there a queue system?

For now there is a single shared queue, so wait times can be quite long.

5. Why are my chats not saved after reloading the tab?

This exists for two reasons: your privacy, and the fact that everything is stored in RAM. Your chat ID and the AI's responses are deleted as soon as you close or reload the tab, and RAM is cleared regularly.

6. What hardware do you use for running the models?

I am using an NVIDIA RTX 3090 for text generation and an NVIDIA RTX 4070 as a backup for longer context lengths.

7. Will you get better hardware to improve speed and develop larger models?

For now I am unable to buy better or newer hardware, but once I start hosting custom models I will start buying more GPUs.

8. Why include "local" in the name?

I want users to get the same level of privacy as running AI models locally on their own computer.

9. Will you host uncensored models, or models that can actively search the web for better answers?

No. I don't want to be responsible for anything that you do.

10. What data do we collect?

We collect your IP address and hold it in RAM for 24 hours, solely for rate limiting. Your chat ID links back to your prompts and the AI's responses, and it is removed when you close or reload the tab. Everything is stored in RAM and cleared every day. We also keep a permanent CSV file with information such as how long each AI inference took to process, the number of characters in the prompt and output, and which models were used. This CSV file stores neither the actual prompt or response nor your IP address; it contains no data that could be traced back to you or your interactions. Instead, it is used for planning and for answering questions like: when are users most active, which models do users prefer, how does prompt length correlate with processing time, and how does processing time correlate with output length?
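To make the CSV claim concrete, here is a hedged sketch of the kind of metrics row described above (the field names are my own illustration, not the site's actual schema). Note that only lengths and timings are written, never the text itself or an IP address:

```python
import csv
import io
import time

# Hypothetical column layout for the anonymous metrics file.
FIELDS = ["timestamp", "model", "prompt_chars", "output_chars", "inference_s"]

def log_inference(writer, model, prompt, output, inference_s):
    """Write one metrics row: sizes and timings only, no content."""
    writer.writerow({
        "timestamp": int(time.time()),
        "model": model,
        "prompt_chars": len(prompt),    # character count, not the prompt
        "output_chars": len(output),    # character count, not the response
        "inference_s": round(inference_s, 3),
    })

# Example: log one request into an in-memory buffer.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_inference(writer, "text_large", "secret prompt", "some answer", 1.234)
```

The resulting file can answer questions like "how does prompt length correlate with processing time?" without ever containing anything traceable to a person.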

11. Where can I contact you with questions, suggestions, concerns, or vulnerabilities?

You can contact me at local.axiom.ai@gmail.com.


← Back to Chat