

Yeah, I deliberately did not specify it. But I can imagine configurations that work against some groups. Of course, you shouldn’t store your footage with your enemies.
There’s only one method that also covers lost footage: live-streaming the media to multiple trustworthy places.
If it’s for documentation, try Docusaurus.
This won’t protect your .env files though, right?
Right, but my machine is safe at least.
It’s possible. For the pnpm package cache you need to attach another volume, and another one for globally installed packages.
Keep your secrets:
alias npm="docker run -it --rm -v $(pwd):/app -w /app node:latest npm"
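A sketch of the same pattern for pnpm with persistent caches (the volume names and container paths are assumptions; check pnpm store path for the real location):
alias pnpm="docker run -it --rm \
  -v $(pwd):/app \
  -v pnpm-store:/root/.local/share/pnpm/store \
  -v pnpm-global:/root/.local/share/pnpm/global \
  -w /app node:latest corepack pnpm"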
So, for example, online editors that store state in huge JSONs and make frequent backups can benefit from it. That’s actually great, good luck with it!
By IO-heavy I meant db operations or other external requests. When the request handler starts, it waits for the IO to complete. While it waits, the server can accept other requests, and so on, so the bottleneck in my case is the IO, not the request parsing.
I imagine it like this (imaginary numbers): parsing and routing a request costs ~0.1 ms, while the db call it waits on costs ~50 ms, so the framework accounts for well under 1% of the latency.
In which case, it wouldn’t matter which HTTP framework you use. However, there are probably other use cases.
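A minimal sketch of the kind of handler I mean, assuming Express and node-postgres (the route, table, and port are illustrative):
import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool(); // connection settings come from the PG* env vars

app.get("/orders/:id", async (req, res) => {
  // Nearly all of the request's latency is spent awaiting this query;
  // while it waits, the event loop is free to accept other requests.
  const { rows } = await pool.query(
    "SELECT * FROM orders WHERE id = $1",
    [req.params.id]
  );
  res.json(rows[0] ?? null);
});

app.listen(3000);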
How much overhead does a simple request handler have with Brhama and with Express in ms?
It matters because most of my endpoints are IO-heavy. I assume the framework cost is negligible compared to that, and if it is negligible for the typical use case, then what use cases do you see where it matters most?
With SQL you scale it when required, via sharding, read replicas, cache layers, and denormalization.
With NoSQL, afaik, you have to deal with scaling from the beginning by keeping denormalized data consistent, which adds code overhead. Is MongoDB different in this regard?
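As a sketch of the code overhead I mean, assuming the official mongodb Node driver (the collections and fields are made up): renaming a user also has to rewrite every copy of the name denormalized into posts.
import { MongoClient, ObjectId } from "mongodb";

const client = new MongoClient("mongodb://localhost:27017");
const db = client.db("app");

async function renameUser(userId: ObjectId, name: string) {
  await client.connect();
  // A transaction keeps the denormalized copies consistent
  // (MongoDB transactions require a replica set).
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      await db.collection("users").updateOne(
        { _id: userId }, { $set: { name } }, { session }
      );
      await db.collection("posts").updateMany(
        { authorId: userId }, { $set: { authorName: name } }, { session }
      );
    });
  } finally {
    await session.endSession();
  }
}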
That’s not the point of JSONB. Use normalized tables whenever you can. JSONB lets you store a document with an unknown structure, and it lets you access that data from within SQL.
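For example, a sketch with node-postgres, assuming a hypothetical events table with a JSONB payload column:
import { Pool } from "pg";

const pool = new Pool();

// ->> extracts a JSONB field as text, and @> filters on containment,
// so a document with unknown structure is still queryable from plain SQL.
const { rows } = await pool.query(
  "SELECT id, payload->>'type' AS type FROM events WHERE payload @> $1::jsonb",
  [JSON.stringify({ source: "webhook" })]
);
console.log(rows);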
The concept of understanding implies some form of meta-knowledge about the subject.
That can be solved if you teach it the meta-knowledge with intermediary steps, for example:
prompt: 34*3 =
step1: 4*3 + 30*3 =
step2: 12 + 10*3*3 =
step3: 12 + 10*9 =
step4: 12 + 90 =
step5: 100 + 2 =
step6: 102
result: 102
It’s hard to find such training data though, but Claude, for example, already uses intermediary steps. It preprocesses your input multiple times: it writes code, runs that code to process your input, and even that is still not the final response. Unfortunately, it’s already smarter than some junior developers, and the consequences of that are worrying.
But LLMs are not simply probabilistic machines. They are neural nets. For sure, they haven’t seen the world. They didn’t learn the way we learn. What they mean by a caterpillar is just a vector. For humans, that’s a 3D, colorful, soft object with some traits.
You can’t expect a being that sees chars and produces chars to know what we mean by a caterpillar. Its job is to figure out the next char. But you could expect it to understand some grammar rules, although we can’t expect it to explain the grammar.
For another example, I wrote a simple neural net, and with 6 neurons it could learn XOR. I think we can say that it understands XOR, can’t we? Or would you say that an XOR gate understands XOR better? I would not use the word “understand” for something that cannot learn, but why wouldn’t we use it for a NN?
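A minimal sketch of such a net in TypeScript (the layer sizes, learning rate, and epoch count are illustrative):
const rnd = () => Math.random() * 2 - 1;
const sigmoid = (x: number) => 1 / (1 + Math.exp(-x));

// XOR truth table: [input a, input b, expected output]
const data: [number, number, number][] = [[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]];

// 2 inputs -> 2 hidden neurons -> 1 output, randomly initialized.
const wh = [[rnd(), rnd()], [rnd(), rnd()]];
const bh = [rnd(), rnd()];
const wo = [rnd(), rnd()];
let bo = rnd();
const lr = 0.5;

for (let epoch = 0; epoch < 20000; epoch++) {
  for (const [a, b, target] of data) {
    // Forward pass.
    const h = [
      sigmoid(wh[0][0] * a + wh[0][1] * b + bh[0]),
      sigmoid(wh[1][0] * a + wh[1][1] * b + bh[1]),
    ];
    const out = sigmoid(wo[0] * h[0] + wo[1] * h[1] + bo);
    // Backward pass: squared-error loss, sigmoid derivative out * (1 - out).
    const dOut = (out - target) * out * (1 - out);
    for (let j = 0; j < 2; j++) {
      const dH = dOut * wo[j] * h[j] * (1 - h[j]);
      wo[j] -= lr * dOut * h[j];
      wh[j][0] -= lr * dH * a;
      wh[j][1] -= lr * dH * b;
      bh[j] -= lr * dH;
    }
    bo -= lr * dOut;
  }
}

// After training, the outputs are close to the XOR truth table.
for (const [a, b] of [[0, 0], [0, 1], [1, 0], [1, 1]]) {
  const h = [
    sigmoid(wh[0][0] * a + wh[0][1] * b + bh[0]),
    sigmoid(wh[1][0] * a + wh[1][1] * b + bh[1]),
  ];
  console.log(a, b, sigmoid(wo[0] * h[0] + wo[1] * h[1] + bo).toFixed(2));
}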
Any explanation? If they can write text, I assume they understand grammar. They are definitely skilled in some way. If you do snowboarding, do you understand snowboarding? The word “understand” can be misleading. That’s why I’m asking: what is understanding?
What is understanding? Isn’t understanding just a consequence of neurons communicating with each other? In that case, LLMs with deep learning can understand things.
You can try dual-booting a Linux distro with Android. I expect it to work, but you can never be sure with phones.
You are the one adding extra complexity
I’m not the one defining the business requirements. I could build a site with true progressive enhancement; it’s just extra work, because the requirement is a modern page with actions, modals, notifications, etc.
There are two ways I can fulfill this: SSR with scripts that feel like hacks, or CSR. I choose CSR, but then progressive enhancement becomes extra work.
Why is it “impossible to do them reliably” - without js presumably?
What I meant is that you cannot turn an arbitrary existing webpage into a basic page with some simple tricks like disabling JS. That would be a never-ending fight.
It suggests using minimal JS, and I use React the same way: whatever I can do with CSS, I do with CSS. But I am not going to footgun myself: I start the app with React because at some point I will need React.
I used Claude Code to migrate a small Rust project from raw SQL to an ORM. It was next level. In the timespan of a small bug fix I could rewrite the data model. It tested the code and fixed the errors; I was amazed. I reviewed every change, so I could spot problems like a migration that would fail with prod data. I wrote a new prompt to fix that, and it fixed it.
For anybody new to Claude Code: it’s a TUI app where you log in and write prompts for the project in the current directory. The way it works, it searches the project’s files based on the prompt and locates the related code sections, so it gathers context pretty well. It can suggest changes, suggest running CLI commands, and read their output, so it reacts to its own results. You can accept, or intercept and correct it, at any time.
I ran it in docker just in case.
In summary, this is the real deal, but of course the code needs to be reviewed. Sometimes it produces, simply put, unmaintainable code that shouldn’t be used, whether it works or not.