To obtain a PhD, you need to contribute something original to your field of study, not just regurgitate what you’ve scraped from other studies.
Also, a PhD is an expert in a hyper-specific niche area of their specialty. Would I trust someone who has a PhD in astrophysics with an expertise in black holes when it comes to talking about black holes? Yes.
Would I trust that person to give me medical advice? Probably not. Would I trust them to show me basic car maintenance? Maybe, and only because they have experience with car maintenance, not because of the PhD.
CEOs are so obsessed with this, thinking it can replace doctors at diagnosing people too.
Maybe not replace, but some flavour of AI is already pretty good at analyzing patterns in X-ray images and stuff like that, which might be a significant help to doctors in the future. Obviously not the glorified autocorrect Altman is running on hype money, but actually useful neural-network things (or whatever they really are, I'm not the one building them).
Narrow models trained on a task-specific dataset tend to be very good at their specialization. So protein folding and material science have benefited from machine learning, but we shouldn't mistake that for being the same thing as ChatGPT (see the rough sketch below).
One of the bigger problems we have with AI at the moment (in my very inexpert opinion) is that companies seem to be throwing LLMs at every problem and swearing that it'll achieve AGI soon.
Meanwhile, AlphaFold is more closely related to Stable Diffusion than it is to ChatGPT.
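To give a rough idea of what "narrow model trained on a task-specific dataset" means in practice, here's a minimal toy sketch of my own (nothing to do with AlphaFold or any real medical/materials model): a small classifier fit to one synthetic dataset. It gets good at exactly that one task and can do nothing else.

```python
# Toy illustration of a "narrow" model: a small classifier trained on a single
# task-specific dataset. The dataset here is synthetic and stands in for, say,
# extracted image features or spectra -- purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic two-class dataset as a stand-in for real task-specific data.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)  # trained only on this one task

# It scores well on its own niche, but it has no ability outside this task.
print(accuracy_score(y_test, model.predict(X_test)))
```

The point of the sketch is just the contrast: this thing does one narrow job well, whereas an LLM is a general text predictor that happens to sound confident about everything.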
If it spouts out enough nonsense, something will be right eventually.
GPT-5 is gonna monkey a new Hamlet? IT'S GONNA BE THE BLURST OF TIMES?
You stupid AI.
I guess given enough hardware and environmental destruction, everything is possible.
I wonder whether any LLMs are any good at hypothesis generation.
1000 monkeys with typewriters comes to mind.
Probably not. It will likely scrape from sources that aren't even based on research or research papers, and if it's allowed to use the internet it will probably process opinions too.