SwordOfDarkness
New Member · Joined Dec 8, 2021
See, those sort of byte-by-byte approaches are specifically not used, precisely because they are very resource-intensive. Generally we use CNNs to bring down the computational power required by a lot, i.e. instead of doing it bit by bit (which won't hold much info anyway) we look at different small areas of the picture and combine the indicators we get from them.

Arre bhai - you're talking about ML, not true AI.
YOLO and PyTorch-based segmentation algorithms will run on your run-of-the-mill gaming system; AI won't. I am talking about ChatGPT/LLM-level work: literally thousands of TBs of data, all fed to thousands of discrete GPUs that go over every byte and every pixel matrix and find patterns where human brains can't see shit.
That needs a tonne of money, space, and power.
ChatGPT is an LLM, which is very different from a surveillance system. An LLM has to have a general understanding of millions of concepts, so the dataset size balloons. Naval surveillance systems have a much more specialised job: differentiating between different types of boats/ships could be done with ~50,000 pictures, at a rough guess.
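To put a rough number on why the patch-based CNN approach is so much cheaper than looking at every byte at once: a small shared convolution kernel has a few thousand parameters, while a dense layer connecting every pixel to every unit runs into the hundreds of millions. All figures below are illustrative, not from any real surveillance system.

```python
# Back-of-envelope parameter counts: dense (every-pixel) layer vs a
# small convolutional layer with a shared kernel. Illustrative numbers.

H, W, C = 640, 480, 3    # a modest input image (height, width, channels)
hidden = 1024            # hypothetical dense hidden-layer width

# Fully connected: every input value connects to every hidden unit.
dense_params = H * W * C * hidden

# Convolution: one 3x3 kernel shared across the whole image, 64 filters.
k, filters = 3, 64
conv_params = k * k * C * filters

print(f"dense layer parameters: {dense_params:,}")   # ~943 million
print(f"conv layer parameters:  {conv_params:,}")    # 1,728
print(f"the conv layer is {dense_params // conv_params:,}x smaller")
```

The shared kernel is exactly the "look at small areas and combine the indicators" trick: the same few weights sweep over every patch of the picture instead of each pixel getting its own weight.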
Also, AI is the broader concept of "a machine making decisions by itself". Nowadays almost all AI functions are performed using ML (machine learning, which covers all algorithms where the machine learns on its own rather than being fed instructions), though AI also includes things like rules programmed by human experts (as in the oldest chess engines).
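A toy contrast between the two styles just described, with entirely made-up data: an expert-written rule versus a "learned" decision boundary derived from labelled examples.

```python
# Toy illustration: expert-coded rule (classic AI) vs a rule the machine
# derives from labelled data (ML). All numbers here are invented.

# 1) Classic rule-based AI: a human expert writes the decision logic.
def expert_rule(length_m):
    return "warship" if length_m > 50 else "fishing boat"

# 2) ML: the machine works out the decision boundary from examples.
data = [(12, "fishing boat"), (18, "fishing boat"), (25, "fishing boat"),
        (90, "warship"), (120, "warship"), (150, "warship")]

def learn_threshold(samples):
    # Crudest possible 1-D "learning": split halfway between the longest
    # labelled fishing boat and the shortest labelled warship.
    boats = [l for l, y in samples if y == "fishing boat"]
    ships = [l for l, y in samples if y == "warship"]
    return (max(boats) + min(ships)) / 2

threshold = learn_threshold(data)   # 57.5 for this toy dataset

def learned_rule(length_m):
    return "warship" if length_m > threshold else "fishing boat"

print(expert_rule(70), learned_rule(70))   # both classify 70 m as warship
```

Same job either way; the difference is whether a human wrote the threshold or the data produced it. Real systems learn millions of such parameters instead of one.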
For cost, one of the common machines for advanced AI research is the NVIDIA DGX-1 (an 8-GPU system, not a single GPU), which costs ~80 lakhs. For general research, much cheaper options are fine.