EarlTurlet

joined 1 year ago
[–] EarlTurlet@lemmy.zip 6 points 6 months ago (1 children)

You may be less irritated by this with age

[–] EarlTurlet@lemmy.zip 10 points 6 months ago (8 children)

Misusing words like "setup" vs "set up", or "login" vs "log in". "Anytime" vs "any time" also steams my clams.

[–] EarlTurlet@lemmy.zip 12 points 11 months ago

I use Fossil for all of my personal projects. Having a wiki and bug tracker built-in is really nice, and I like the way repositories sync. It's perfect for small teams that want everything, but don't want to rely on a host like GitHub or set up complicated software themselves.

[–] EarlTurlet@lemmy.zip 106 points 1 year ago (16 children)

I had this set up the day it was available in my area. Never got an alert. I find it difficult to believe I wasn't "exposed" during the pandemic, so I assume this didn't really provide much value.

[–] EarlTurlet@lemmy.zip 10 points 1 year ago (1 children)

Google cases always seem hit-or-miss. I just buy the same Spigen case for every phone. I know I like it.

[–] EarlTurlet@lemmy.zip 13 points 1 year ago

This is a good reason to use Dvorak

[–] EarlTurlet@lemmy.zip 3 points 1 year ago

But I'm bi-testual

[–] EarlTurlet@lemmy.zip 6 points 1 year ago

Looks like he realized he left the oven on

[–] EarlTurlet@lemmy.zip 2 points 1 year ago

Got it. So more for data at rest rather than handling the sending too?

SimpleX does file transfer pretty well, not sure about Briar now that I think about it.


Poking around the network requests for ChatGPT, I've noticed the /backend-api/models response includes details for each model, including its maximum token count.

For me:

  • GPT-3.5: 8191
  • GPT-4: 4095
  • GPT-4 with Code Interpreter: 8192
  • GPT-4 with Plugins: 8192

It seems to be accurate. I've had content that is too long for GPT-4, but is accepted by GPT-4 with Code Interpreter. The quality feels about the same, too.

Here's the response I get from /backend-api/models, as a Plus subscriber:

{
    "models": [
        {
            "slug": "text-davinci-002-render-sha",
            "max_tokens": 8191,
            "title": "Default (GPT-3.5)",
            "description": "Our fastest model, great for most everyday tasks.",
            "tags": [
                "gpt3.5"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4",
            "max_tokens": 4095,
            "title": "GPT-4",
            "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
            "tags": [
                "gpt4"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4-code-interpreter",
            "max_tokens": 8192,
            "title": "Code Interpreter",
            "description": "An experimental model that can solve tasks by generating Python code and executing it in a Jupyter notebook.\nYou can upload any kind of file, and ask model to analyse it, or produce a new file which you can download.",
            "tags": [
                "gpt4",
                "beta"
            ],
            "capabilities": {},
            "enabled_tools": [
                "tools2"
            ]
        },
        {
            "slug": "gpt-4-plugins",
            "max_tokens": 8192,
            "title": "Plugins",
            "description": "An experimental model that knows when and how to use plugins",
            "tags": [
                "gpt4",
                "beta"
            ],
            "capabilities": {},
            "enabled_tools": [
                "tools3"
            ]
        },
        {
            "slug": "text-davinci-002-render-sha-mobile",
            "max_tokens": 8191,
            "title": "Default (GPT-3.5) (Mobile)",
            "description": "Our fastest model, great for most everyday tasks.",
            "tags": [
                "mobile",
                "gpt3.5"
            ],
            "capabilities": {}
        },
        {
            "slug": "gpt-4-mobile",
            "max_tokens": 4095,
            "title": "GPT-4 (Mobile, V2)",
            "description": "Our most capable model, great for tasks that require creativity and advanced reasoning.",
            "tags": [
                "gpt4",
                "mobile"
            ],
            "capabilities": {}
        }
    ],
    "categories": [
        {
            "category": "gpt_3.5",
            "human_category_name": "GPT-3.5",
            "subscription_level": "free",
            "default_model": "text-davinci-002-render-sha",
            "code_interpreter_model": "text-davinci-002-render-sha-code-interpreter",
            "plugins_model": "text-davinci-002-render-sha-plugins"
        },
        {
            "category": "gpt_4",
            "human_category_name": "GPT-4",
            "subscription_level": "plus",
            "default_model": "gpt-4",
            "code_interpreter_model": "gpt-4-code-interpreter",
            "plugins_model": "gpt-4-plugins"
        }
    ]
}

Anyone seeing anything different? I haven't really seen this compared anywhere.
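If you want to compare your own account, here's a minimal sketch that pulls the slug → max_tokens mapping out of a saved copy of the response (assumes you've copied the JSON from the browser's network tab; the inline response here is a trimmed stand-in):

```python
import json

# A trimmed /backend-api/models response, saved from the network tab.
# Replace this with your own full copy of the JSON.
raw = """
{
    "models": [
        {"slug": "text-davinci-002-render-sha", "max_tokens": 8191},
        {"slug": "gpt-4", "max_tokens": 4095},
        {"slug": "gpt-4-code-interpreter", "max_tokens": 8192},
        {"slug": "gpt-4-plugins", "max_tokens": 8192}
    ]
}
"""

response = json.loads(raw)

# Map each model slug to its advertised token limit.
limits = {m["slug"]: m["max_tokens"] for m in response["models"]}

for slug, tokens in sorted(limits.items()):
    print(f"{slug}: {tokens}")
```

Makes it easy to diff against someone else's output without eyeballing the whole JSON blob.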
