

I feel like it would be easier to help with the original problems that led to these unusual choices.


You can change the sorting to show new posts. You can also change the list to show posts from all communities on all federated instances (except banned ones). I mostly find very recent posts that way.


Downvoting because the title says “best” and I disagree. Apple products have a bunch of drawbacks; I wouldn’t buy them even though the hardware is strong and efficient.
I don’t think it’s ADD. There’s a book called ‘Thinking, Fast and Slow’. In it, the psychologist Daniel Kahneman separates mental functions into two systems: System 1 is intuitive, effortless, and fast; System 2 is effortful and slow, but precise. What happens here is simply that people are trying to be efficient with their thinking, so they engage less of System 2, which is exactly what reading requires.


I used Claude Code to migrate a small Rust project from raw SQL to an ORM. It was next level: in the time a small bug fix would normally take, I could rewrite the data model. It tested the code, it fixed the errors; I was amazed. I reviewed every change, so I could spot problems, like a migration that would have failed with prod data. I wrote a new prompt to fix that, and it fixed it.
For anybody new to Claude Code: it’s a TUI app where you log in and write prompts for the project in the current directory. It searches the files in the project based on the prompt and locates the related code sections, so it gathers context pretty well. It can suggest changes, it can suggest running CLI commands, and it can read their output. It reacts to its own results. You can accept, or intercept and correct it, at any time.
I ran it in Docker, just in case.
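Roughly like this, as a sketch (my-claude-image is a hypothetical image with Claude Code preinstalled; the claude command starts the TUI in the mounted project directory):
docker run -it --rm -v "$(pwd)":/work -w /work my-claude-image claude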
In summary, this is the real deal, but of course the code needs to be reviewed. Sometimes it produces, simply put, unmaintainable code that shouldn’t be used; whether it works or not, it has to go.


Yeah, I deliberately did not specify it. But I can imagine configurations that work against some groups. Of course you shouldn’t store your footage with your enemies.


There’s only one method that also covers lost footage: live-streaming the media to multiple trustworthy places.
If it is for documentation, try Docusaurus.
This won’t protect your .env files though, right?
Right, but my machine is safe at least.
It’s possible. For the pnpm package cache you need to attach another volume, and another one for globally installed packages.
Keep your secrets:
alias npm='docker run -it --rm -v "$(pwd)":/app -w /app node:latest npm'
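For the pnpm setup mentioned above, a sketch of what that could look like (the volume names and store paths are illustrative, check pnpm store path for where your version actually keeps things, and this assumes the Node image’s bundled corepack; note the single quotes, so $(pwd) expands when the alias is used rather than when it is defined):
alias pnpm='docker run -it --rm -v "$(pwd)":/app -w /app -v pnpm-store:/root/.local/share/pnpm/store -v pnpm-global:/root/.local/share/pnpm/global node:latest corepack pnpm'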


So, for example, online editors that store state in huge JSONs and back up frequently could benefit from it. That’s actually great, good luck with it!


By IO-heavy I meant DB operations or other external requests. When the request handler starts, it waits for the IO to complete. While it waits, the server can accept other requests, and so on, so in my case the bottleneck is the IO, not the request parsing.
I imagine it like this (imaginary numbers): parsing and routing a request costs about 0.1 ms, while a DB roundtrip costs about 50 ms, so the framework accounts for well under 1% of the latency. In which case, it wouldn’t matter which HTTP framework you use. However, there are probably other use-cases.
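A minimal sketch of that situation, with Express standing in for whatever framework and a timer standing in for the DB (all names and numbers here are illustrative): while one handler is suspended at the await, the event loop keeps accepting and parsing other requests.
import express from "express";

const app = express();

// stand-in for a ~50 ms DB roundtrip; it waits without blocking the event loop
const fakeDbQuery = () => new Promise<string>(res => setTimeout(() => res("row"), 50));

app.get("/item", async (_req, res) => {
  const row = await fakeDbQuery(); // handler suspended here: IO dominates the latency
  res.json({ row });               // the framework's own work is micro-scale next to it
});

app.listen(3000);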


How much overhead does a simple request handler have with Brhama versus Express, in ms?
It matters because most of my endpoints are IO-heavy. I assume the framework cost is negligible compared to that, and if it is negligible for the typical use-case, then what use-cases do you see where it matters most?


With SQL you scale when it becomes necessary, via sharding, read replicas, cache layers, and denormalization.
With NoSQL, afaik, you have to deal with scaling from the beginning, by keeping denormalized data consistent yourself, which adds code overhead. Is MongoDB different in this regard?
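What I mean by that code overhead, as a hedged sketch (the schema and names are made up): if posts embed a copy of the author’s name, a simple rename must also touch every document holding that copy.
import { MongoClient } from "mongodb";

interface User { _id: string; name: string }
interface Post { _id: string; authorId: string; authorName: string }

async function renameUser(client: MongoClient, userId: string, newName: string) {
  const db = client.db("app");
  // the "real" write
  await db.collection<User>("users").updateOne({ _id: userId }, { $set: { name: newName } });
  // the denormalization tax: every embedded copy must be updated too,
  // and a crash between the two writes leaves stale copies behind
  await db.collection<Post>("posts").updateMany(
    { authorId: userId },
    { $set: { authorName: newName } }
  );
}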


That’s not the point of JSONB. Use normalized tables whenever you can. JSONB lets you store a document with an unknown structure, and it lets you access that data from within SQL.
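A hedged sketch of that usage with node-postgres (the table and field names are invented): normalized columns where the shape is known, one jsonb column for the part that isn’t, and the JSONB part stays queryable from SQL.
import { Pool } from "pg";

const pool = new Pool(); // connection settings come from the PG* env vars

async function demo() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS events (
      id      bigserial PRIMARY KEY,
      kind    text NOT NULL,   -- known structure: a normal column
      payload jsonb NOT NULL   -- unknown structure: a document
    )`);

  // node-postgres serializes plain objects to JSON for jsonb parameters
  await pool.query("INSERT INTO events (kind, payload) VALUES ($1, $2)", [
    "webhook",
    { source: "github", action: "opened" },
  ]);

  // ->> extracts a JSON field as text, usable in WHERE like any column
  const { rows } = await pool.query(
    "SELECT id, kind FROM events WHERE payload->>'source' = $1",
    ["github"]
  );
  console.log(rows);
}

demo();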


The concept of understanding implies some form of meta-knowledge about the subject.
That can be solved if you teach it the meta-knowledge with intermediary steps, for example:
prompt: 34*3=
step1: 4*3 + 30*3 =
step2: 12 + 10*3*3 =
step3: 12 + 10*9=
step4: 12 + 90 =
step5: 100 + 2 =
step6: 102
result: 102
It’s hard to find such training data though, but Claude, for example, already uses intermediary steps. It preprocesses your input multiple times: it writes code, runs that code to process your input, and even that is still not the final response. Unfortunately, it’s already smarter than some junior developers, and the consequences are worrying.


But LLMs are not simply probabilistic machines; they are neural nets. For sure, they haven’t seen the world, and they didn’t learn the way we learn. What they mean by a caterpillar is just a vector; for humans, it’s a 3D, colorful, soft object with certain traits.
You can’t expect beings that see chars and produce chars to know what we mean by a caterpillar. Their job is to figure out the next char. But you could expect them to understand some grammar rules, although we can’t expect them to explain the grammar.
For another example, I wrote a simple neural net, and with 6 neurons it could learn XOR. I think we can say that it understands XOR, can’t we? Or would you say an XOR gate understands XOR better? I wouldn’t use the word “understand” for something that cannot learn, but then why wouldn’t we use it for a neural net?
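Roughly the kind of net I mean, as a sketch (2 inputs, 3 hidden, 1 output, so 6 neurons; layer sizes and hyperparameters are from memory rather than the original code, and a bad random init can occasionally need a re-run):
// a 2-3-1 sigmoid net trained with per-sample gradient descent on XOR
const sigmoid = (x: number) => 1 / (1 + Math.exp(-x));
const rand = () => Math.random() * 2 - 1;

const w1 = [0, 1, 2].map(() => [rand(), rand()]); // hidden weights (3 units x 2 inputs)
const b1 = [rand(), rand(), rand()];              // hidden biases
const w2 = [rand(), rand(), rand()];              // output weights
let b2 = rand();                                  // output bias

const data: [number[], number][] = [
  [[0, 0], 0], [[0, 1], 1], [[1, 0], 1], [[1, 1], 0],
];

const forward = (x: number[]) => {
  const h = w1.map((w, i) => sigmoid(w[0] * x[0] + w[1] * x[1] + b1[i]));
  const y = sigmoid(h.reduce((s, hi, i) => s + w2[i] * hi, 0) + b2);
  return { h, y };
};

const lr = 0.5;
for (let epoch = 0; epoch < 20000; epoch++) {
  for (const [x, t] of data) {
    const { h, y } = forward(x);
    const dy = (y - t) * y * (1 - y); // squared-error gradient at the output
    for (let i = 0; i < 3; i++) {
      const dh = dy * w2[i] * h[i] * (1 - h[i]); // backprop into hidden unit i
      w2[i] -= lr * dy * h[i];
      w1[i][0] -= lr * dh * x[0];
      w1[i][1] -= lr * dh * x[1];
      b1[i] -= lr * dh;
    }
    b2 -= lr * dy;
  }
}

// outputs end up close to 0 or 1, matching the XOR truth table
for (const [x, t] of data) console.log(x, t, forward(x).y.toFixed(3));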


Any explanation? If they can write text, I assume they understand grammar. They are definitely skilled in some way. If you do snowboarding, do you understand snowboarding? The word “understand” can be misleading; that’s why I’m asking what understanding is.
And the uncomfortable question is: why was he moved closer to Scala in the first place?
(OK, I’m no different, I learned Elixir once.)