What is this “growing up” you talk about?
As for 2: You assume that there is an objective reality free from emotion? There might be, but I am unsure whether it can be perceived by anything living. Or by AI, for that matter. It is, after all, like you said, trained on human data.
Anyway, time will tell if OpenAI is correct in their assessment, or if humans will want the human touch. As a tool for trained professionals to use, sure. As a substitute for one? I’m not convinced yet.
I disagree with your first statement. Law is about the application of rules, not the rules themselves. In a perfect world, it would be about determining which law has precedence in the matter at hand, a task in itself outside of AI capabilities as it involves weighing moral and ethical principles against each other, but in reality it often comes down to why my interpretation of reality is the correct one.
I’m just at the beginning, but my plan is to use it to evaluate policy docs. There is so much context to keep up with, so any way to load more context into the analysis will be helpful. Learning how to add Excel information to the analysis will also be a big step forward.
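One low-tech way to get spreadsheet data into an analysis prompt, as a sketch: export the sheet to CSV and convert it to a markdown table the model can read. Everything here is standard-library Python; the function name is just an illustration, not any particular tool’s API.

```python
import csv
import io

def csv_to_markdown(csv_text: str) -> str:
    """Turn CSV text (e.g. an exported Excel sheet) into a markdown
    table that can be pasted into an LLM prompt as context."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    if not rows:
        return ""
    header, body = rows[0], rows[1:]
    lines = [
        "| " + " | ".join(header) + " |",
        "| " + " | ".join("---" for _ in header) + " |",
    ]
    for row in body:
        lines.append("| " + " | ".join(row) + " |")
    return "\n".join(lines)
```

Models generally handle markdown tables better than raw comma-separated text, since the column structure is explicit on every row.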
I will have to check out Mistral :) So far Qwen2.5 14B has been the best at providing analysis of my test scenario. But I guess an even higher-parameter model will have its advantages.
Thank you! Very useful. I am, again, surprised how a better way of asking questions affects the answers almost as much as using a better model.
But per the definition given involving negative mass, it should be “measurable mass in the presence of exotic matter”. Anywho…
I need to look into flash attention! And if I understand you correctly, a larger llama3.1 model would be better prepared to handle a larger context window than a smaller llama3.1 model?
Thanks! I actually picked up the concept of the context window, and from there how to create a modelfile, through one of the links provided earlier, and it has made a huge difference. In your experience, would a small model like llama3.2 with a bigger context window be able to provide the same output as a big model, like qwen2.5:14b, with a more limited window? The bigger window obviously allows more data to be taken into account, but how does the model size compare?
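For anyone following along, the Modelfile part is only a few lines. A minimal example that raises the context window on a pulled model (16384 here is just an illustrative value; memory use grows with `num_ctx`):

```
FROM qwen2.5:14b
PARAMETER num_ctx 16384
```

Save it as `Modelfile` and build a named variant with `ollama create qwen-bigctx -f Modelfile` (the name `qwen-bigctx` is just an example), then run it like any other model.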
Thank you for your detailed answer :) It’s been 20 years and 2 kids since I last tried my hand at reading code, but I’m doing my best to catch up 😊 The context window is a concept I picked up from your links, and it has helped me a great deal!
The problem I keep running into with that approach is that only the last page is actually summarised and some of the texts are… Longer.
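One way around “only the last page gets summarised” is map-reduce summarization: split the text into chunks that fit the prompt, summarise each chunk, then summarise the combined summaries. A sketch in plain Python; `summarize` here is a stand-in for whatever call you make to the model, not a real library function:

```python
def chunk_text(text, max_chars=6000, overlap=200):
    """Split a long text into overlapping chunks that each fit
    within the prompt limit. Overlap preserves context at the seams."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks

def summarize_long(text, summarize, max_chars=6000):
    """Map-reduce: summarise each chunk, then summarise the joined
    summaries if they still exceed the limit."""
    parts = [summarize(c) for c in chunk_text(text, max_chars)]
    combined = "\n".join(parts)
    return combined if len(combined) <= max_chars else summarize(combined)
```

This way every page contributes to the final summary instead of only whatever fit at the end of the prompt.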
Do you know of any nifty resources on how to create RAGs using ollama/webui? (Or even fine-tuning?) I’ve tried to set it up, but the documents provided don’t seem to be analysed properly.
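The core idea of the retrieval step in RAG can be illustrated without any framework at all. Real setups use embedding models (Ollama ships several) instead of word counts, but this toy sketch shows what “retrieve the most relevant chunks and paste them into the prompt” means mechanically. Everything below is illustrative, not any particular tool’s API:

```python
from collections import Counter
import math

def score(query, doc):
    """Cosine similarity over word counts -- a crude stand-in
    for real embedding vectors."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    overlap = sum(q[w] * d[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return overlap / norm if norm else 0.0

def retrieve(query, docs, k=3):
    """Return the k chunks most similar to the query; these get
    prepended to the prompt -- the 'augmented' part of RAG."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]
```

If documents “don’t seem to be analysed properly”, the retrieval step is often the culprit: the chunks pulled in simply aren’t the relevant ones, so checking what actually lands in the prompt is a good first debugging step.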
I’m trying to get the LLM into reading/summarising a certain type of (wordy) files, and it seems the query prompt is limited to about 6k characters.
I never said anyone did, I was just reminded of a joke describing how some actions overshadow other activities when retold among peers.
Which reminds me of that old joke: “but you fuck one coach…”
Edit: just spotted my typo… I’ll leave it in😊
Well, that’s been the basis for some other products. AMD and Intel come to mind 😊 They both have IP the other needs, and historically Intel has been the dominant one, but now the tables have turned somewhat.
Well, even 84% sunshine transmitted is still rather shiny and is more than what is available at night.
Exactly. His followers expect there to have been an audience and those fictional people should have been crazy about it…
And to make it worse, they are none too concerned with facts to begin with.
Oh, it’s funny because it’s true.
Now I need to go cry in a corner, because it’s true
Anything that is more about talking to different parties rather than documenting and being the one to deliver. The more specialised the people, the better you will connect. They will love your ability to see the patterns of the workplace, your helicopter perspective. That will help them to test their ideas, to understand the concepts and what their task is all about. They will also love that you will not micromanage (as long as you don’t end up hyperfocusing on their topic) and let them do their thing.
Don’t be the specialist. Don’t be the one that tries to have an eye on all the details, all the numbers. I tried to be an accountant for a while…
Had blocking news and access to information been in the cards, as you describe, that would be a different discussion. This is not it. The closest this comes is blocking a link aggregator, one that has been deemed to violate the laws in its area of business and has been reluctant to take steps to rectify the situation.
The supreme court being the one to do it does raise the question of democratic decision-making, something other countries have famously wrestled with recently. Although they also gave their president the power to remove themselves from office, if I’ve understood that particular debacle correctly.
My hope, and my belief, is that the switch to greener options has started and might not be easily stopped. The EU’s Fit for 55 is a big deal, and on the transportation side we see electrics making inroads in the market in a rather big way. Gas prices have plummeted, and since production hasn’t gone up, that leaves the demand side.
On the construction side of things, green heating options have diversified and come down in price, and with local low-temperature heat storage solutions they might become even cheaper and less power hungry.
The only fly in the ointment is that we need to describe it as “increasing resilience”, “cutting costs” and “decreasing dependency on overseas deliveries”. As long as nobody mentions “the environment” as the reason to do something.