Indian Air Force: News & Discussions

Advaidhya Tiwari

Senior Member
Joined
Aug 2, 2018
Messages
1,579
Likes
1,443
And just as an FYI, F-22s have gotten a lot of upgrades, just nothing that is major, or has redefined their roles.
The F22 article only mentions a software upgrade. That is irrelevant.
I specifically mentioned the PowerPC G5.
Here is the Raytheon press release :

http://investor.raytheon.com/phoenix.zhtml?c=84193&p=irol-newsArticle&ID=439701

The original lots (I think up to Lot 5) used the Intel i960MX, which was deemed outdated. Later batches used the Raytheon CIP. This uses the PowerPC G5, aka the PowerPC 970.

Another source for this :
https://www.militaryaerospace.com/a...electronics-but-plan-for-future-upgrades.html

Here is a source for the PowerPC G5 / 970 : https://en.wikipedia.org/wiki/PowerPC_970
Not a good source. At best it says PowerPC G5, as that was the latest release. But military items generally use parts at least 3-4 years older, as they have to be modified to military requirements. So, your article from 2003 can only mean that the F22 has chips older than 180nm. This is a guarantee. Not one of your sources has confirmed decisively that the G5 is used.

Yes, DRAL got only 1000 Cr. I am not disputing this. I am disputing that the rest will go into Uttam for the Rafale. See my links. The Hindu actually has a timetable for when MBDA, Thales, and Dassault will release parts of their offsets, and to which company. The first-year offsets of MBDA and Thales are 0. They pick up later.
That is a secret offset clause, and I am speculating based on my reasoning. I have told you that India does not need imported items but only wants to enhance its own industrial base. So, the import of the Rafale makes no sense to me at all.

Yes, not one processor for all. However, doing something like DAS is very compute heavy. Why do you think we didn't see AR/VR systems until 2015? Even then they demanded the very best CPU + GPU combo for a good amount of time. DAS is in some ways an AR system. It takes input from its sensors and then paints this world onto a helmet. It needs to track the user's helmet movements and show them the respective area they want. It also needs to overlay other information onto this, like target details for what it has identified. This is very much compute intensive.
I agree that processing is needed at high speed, but what you don't understand is that architecture is what matters most, not the node size. It is the highly specialised processor developed for parallel computing that matters most. As long as a country can develop that, an extra 50-60 watts of power consumption becomes irrelevant.

Russia, for example, uses analog processors, which are highly effective in military applications because of the ease of working with EM waves, which are analog in nature. That is why its S400 and other equipment is extremely advanced. India, having been with the USSR, may even have learnt to make analog chips, which work in a much different manner than the digital chips of the West. The Indian Su30 uses an India-made processor, which may mean that India has already advanced in the analog domain.

Huh, so you think that all those liquid-cooled setups are fake? You do realise that liquid cooling is even done on some high-end gaming rigs? And your Wikipedia figure says 125C is the broadly accepted value. Let's take that as the value needed. Show me evidence that a 22nm node can't reach that temperature and work. I've shown evidence that commercial CPUs can operate at 75C just fine. The emergency shutdown for commercial Intel CPUs is 95-100C. So that means they will work even at 90C, just very close to their limit.
Every processor works up to 90-100 Celsius, even civilian ones. Even a mobile processor without any cooling works up to 65-70 Celsius. I am not so sure about the Wikipedia article, as there is very little research material on the internet on semiconductor properties. I am finding it extremely hard to get any proper research material on semiconductor properties at all.
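As an aside, the junction temperatures being argued over here follow a standard thermal-resistance model: T_j = T_ambient + P × θ_JA, where θ_JA (°C per watt) depends on the cooling solution rather than the node. A minimal sketch with made-up illustrative numbers (the 100 W draw and the θ_JA values are assumptions, not data for any real chip):

```python
# Steady-state junction temperature: T_j = T_ambient + P * theta_JA.
# theta_JA (degrees C per watt) is set by the cooling solution, not the node.

def junction_temp(t_ambient_c, power_w, theta_ja):
    """Estimate steady-state junction temperature in degrees C."""
    return t_ambient_c + power_w * theta_ja

# Illustrative (assumed) values: a 100 W chip under three cooling options.
for cooling, theta in [("passive", 1.0), ("air", 0.5), ("liquid", 0.2)]:
    tj = junction_temp(t_ambient_c=35, power_w=100, theta_ja=theta)
    print(f"{cooling:>7} cooling: Tj ~ {tj:.0f} C")
```

The upshot for the debate above: whether a die survives 90-125 C is governed by its rated maximum junction temperature and the cooling's θ_JA, which is why better cooling (liquid over air) buys headroom regardless of node.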

I am writing this from a laptop that was made in 2013. I've used it for at least 2 hours a day on average, with actual usage well above that. I've pushed its CPU and GPU to the limits with games and by encoding various movies. Its CPU and GPU were made on the 22nm node. By my calculations, it's roughly 6 years old.

I also own a PSP from 2006 that works just fine, a desktop from 2009 that's still functional, a laptop from 2002 that works just fine with Linux, and a Gameboy. I can go on about systems I have that are older than 3 years. The lab in my school had computers that were at least 5 years old. They were all used 5 hours a day, 5 days a week, for 9 months of the year. They were replaced because they became obsolete, not because they failed.

Yes, transistor degradation is a thing. But it is irrelevant because of how long it takes. You seem to think that it affects a lot of systems. It doesn't affect anything. You will see mechanical parts like cooling fans fail long before the transistors fail. If you don't, you ended up with a bad chip. Nothing to do with the node.
Actually, I never said that laptop or computer chips will get damaged because they are 22nm or lower. I am only saying that transistors get degraded and damaged. Whether 22nm or higher, all will get damaged one day or another. I was only referring to this process, nothing more.

Again, give a source that 22nm physically can't work at 125C but higher nodes can.
The question is: why has this never been observed anywhere? I've never had any such issues with any equipment I've owned, nor have I heard of it anywhere. Your own sources don't mention it.
There is this fancy technology called liquid cooling. It keeps things within an acceptable temperature range.
Also, here is proof that a 32nm chip can work just fine at 75C : https://www.anandtech.com/show/6023/the-nextgen-macbook-pro-with-retina-display-review/12
That is consumer grade. And air cooled. Those systems are rated for 95-100C. It's not a stretch to believe that 125C is achievable for military needs.
Where is the source that 22nm can't handle this? Don't claim stuff; cite sources.
I must admit that finding exact references on the internet for semiconductor properties is extremely hard. I have been searching for temperature vs node size, but I am unable to find any research paper at all. Considering that only 2 countries - the US (Japan, Korea, and Taiwan are US vassal states controlled by the US military) & China - have the ability to make and test semiconductors at lower nodes, I am not surprised. So, I won't make that claim anymore, as I am unable to find a proper R&D paper that either proves or disproves my point.

Nevertheless, as I said above, architecture is what matters most in computation where power availability is not the limiting factor. Simply using parallel processors will solve any problem that might be caused by reduced computation power. So, as long as India has a good architecture, it should be good enough practically.
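The parallelism-over-node argument in this post reduces to simple arithmetic: how many older chips, run in parallel, match one newer chip, and at what power cost. A hedged sketch with assumed, illustrative figures (the 50 GFLOPS / 20 W modern chip and the 2.5 GFLOPS / 30 W older chip are not real parts), which also assumes perfect parallel scaling that real workloads rarely achieve (Amdahl's law):

```python
import math

def chips_needed(target_gflops, gflops_per_chip):
    """Older chips needed in parallel to match a target, assuming ideal scaling."""
    return math.ceil(target_gflops / gflops_per_chip)

# Assumed figures: one modern chip (50 GFLOPS, 20 W) vs older chips (2.5 GFLOPS, 30 W each).
n = chips_needed(target_gflops=50, gflops_per_chip=2.5)
extra_w = n * 30 - 20
print(f"older chips needed: {n}, extra power vs one modern chip: {extra_w} W")
```

On an aircraft with tens of kilowatts of generator capacity the extra watts may well be tolerable, which is this post's point; the perfect-scaling assumption is its weak spot.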
 

gryphus-scarface

Regular Member
Joined
Apr 20, 2019
Messages
148
Likes
123
Country flag
The F22 article only mentions a software upgrade. That is irrelevant.

Not a good source. At best it says PowerPC G5, as that was the latest release. But military items generally use parts at least 3-4 years older, as they have to be modified to military requirements. So, your article from 2003 can only mean that the F22 has chips older than 180nm. This is a guarantee. Not one of your sources has confirmed decisively that the G5 is used.
Yes, I cited a bad source. You can however see the Raytheon press release, which confirms the move to the Raytheon CIP instead. The Raytheon CIP is a PowerPC 970 derivative. We can tell based on its specification: it has a rating of 10.5GFLOPS. Source : https://www.globalsecurity.org/military/systems/aircraft/f-22-avionics.htm

So based on this, we can infer that it is a PowerPC derivative. 10.5 GFLOPS is massive. For reference, integrated graphics only reached the TFLOPS mark in 2014-15, so it took roughly a decade to achieve a ~100x improvement.
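A quick sanity check on that growth comparison, taking 10.5 GFLOPS around 2004 and 1 TFLOPS integrated graphics around 2015 as the two endpoints (dates as quoted in this thread):

```python
import math

def implied_doubling_time(perf_old, perf_new, years):
    """Years per performance doubling implied by two (performance, date) points."""
    return years / math.log2(perf_new / perf_old)

ratio = 1000 / 10.5  # 1 TFLOPS = 1000 GFLOPS vs 10.5 GFLOPS -> about 95x
t = implied_doubling_time(10.5, 1000.0, years=11)
print(f"improvement: ~{ratio:.0f}x, implied doubling time: ~{t:.1f} years")
```

That works out to roughly a 95x gain, with a doubling time of about 1.7 years, close to the classic Moore's-law cadence.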

Other sources of interest :

https://forums.anandtech.com/threads/does-nasa-still-use-386s.2355160/page-2

http://datasheet.digchip.com/205/205-00367-0-970.pdf (PowerPC 970 spec, it notes a much lower GFLOPS of about 9.7, but I suspect this was overcome with late variants of the 970)

More on the F-22 : https://www.forecastinternational.com/archive/disp_pdf.cfm?DACH_RECNO=942

It seems that the third unused slot was filled not too long ago, because additional processing capability was needed.

That is a secret offset clause, and I am speculating based on my reasoning. I have told you that India does not need imported items but only wants to enhance its own industrial base. So, the import of the Rafale makes no sense to me at all.
Read the articles I cited. For reference here:

https://www.livefistdefence.com/201...of-frances-e4-billion-india-offsets-plan.html

It lists the companies which get offsets, and list of joint ventures being formed.

https://www.thehindu.com/news/natio...-agreements/article26775545.ece?homepage=true

This article spreads the same old "Rafale scam" BS, but if you scroll down you will see a table with numbers listing the percentage of offsets that will be invested each year by each company (Dassault, Thales, MBDA, Snecma).

I agree that processing is needed at high speed, but what you don't understand is that architecture is what matters most, not the node size. It is the highly specialised processor developed for parallel computing that matters most. As long as a country can develop that, an extra 50-60 watts of power consumption becomes irrelevant.
And processor architecture is a function of the node it is on. You can't implement a modern 22nm-era CPU design on a 180nm node. The node is important in architecture design.

Russia, for example, uses analog processors, which are highly effective in military applications because of the ease of working with EM waves, which are analog in nature. That is why its S400 and other equipment is extremely advanced. India, having been with the USSR, may even have learnt to make analog chips, which work in a much different manner than the digital chips of the West. The Indian Su30 uses an India-made processor, which may mean that India has already advanced in the analog domain.
If you mean analog processing, that is very different. Analog processing is by no means superior; it is inferior. Digital processing is pure and does not degrade the input signals, so you can do more processing. There are very few things that are actually better in analog, and almost all of those are already done in analog. You seem to think that the US doesn't have analog technology. They do; in fact every computer has a DAC and an ADC.

Every processor works up to 90-100 Celsius, even civilian ones. Even a mobile processor without any cooling works up to 65-70 Celsius. I am not so sure about the Wikipedia article, as there is very little research material on the internet on semiconductor properties. I am finding it extremely hard to get any proper research material on semiconductor properties at all.
You won't find free articles because all of this is bleeding-edge and very costly research. If just setting up a fab costs billions of dollars, imagine the cost of the research. You can find articles at IEEE, but you will need to pay $15 per article minimum, or become an IEEE member. If you have a friend at an engineering university, you can ask them to give you access.

An interesting thing I did find was for pre IOC F-35 : http://www.f-16.net/forum/viewtopic.php?t=27539

The chips aren't rated for operation above 55C, and aren't rated for storage above 85C. They are also rated for 20G.

Actually, I never said that laptop or computer chips will get damaged because they are 22nm or lower. I am only saying that transistors get degraded and damaged. Whether 22nm or higher, all will get damaged one day or another. I was only referring to this process, nothing more.
Yes, all will get damaged. The rate is still too low to be an issue. Otherwise we would already be seeing a whole lot of failures from Intel's 14nm CPUs and TSMC's 10nm CPUs.
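The "rate is too low to matter" claim is usually expressed with the FIT metric from reliability engineering: 1 FIT = one failure per 10^9 device-hours. A sketch with an assumed FIT rate (the 50 FIT figure is illustrative only; qualified parts publish their own numbers in datasheets):

```python
def expected_failures(fit, hours, population):
    """Expected failures in a fleet, given a FIT rate (failures per 1e9 device-hours)."""
    return fit * hours * population / 1e9

# Assumed: 50 FIT, 10 years of continuous operation, a fleet of 1000 chips.
hours = 10 * 365 * 24
print(f"expected failures: {expected_failures(50, hours, 1000):.1f}")
```

Even running continuously for a decade, a thousand-chip fleet at that rate sees only a handful of failures, which is why fans and connectors, not transistors, dominate field returns.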


I must admit that finding exact references on the internet for semiconductor properties is extremely hard. I have been searching for temperature vs node size, but I am unable to find any research paper at all. Considering that only 2 countries - the US (Japan, Korea, and Taiwan are US vassal states controlled by the US military) & China - have the ability to make and test semiconductors at lower nodes, I am not surprised. So, I won't make that claim anymore, as I am unable to find a proper R&D paper that either proves or disproves my point.

Nevertheless, as I said above, architecture is what matters most in computation where power availability is not the limiting factor. Simply using parallel processors will solve any problem that might be caused by reduced computation power. So, as long as India has a good architecture, it should be good enough practically.
And again I will say that processor architecture is dependent on node size. For example, sometimes you will see gains just from a smaller node. You can get detailed articles on this from old AnandTech CPU reviews; read their articles on Apple's A-series CPUs.

And another thing is that smaller nodes allow smaller chips to fit in the same slot, which means more processing power for the same size. Also note that a lot of the computing in AR/VR systems is done by power-hungry GPUs, which in turn need fast CPUs to feed them instructions. I wouldn't be surprised if the F-35 has a modified GPGPU for its main DAS processing system.
 

Advaidhya Tiwari

Yes, I cited a bad source. You can however see the Raytheon press release, which confirms the move to the Raytheon CIP instead. The Raytheon CIP is a PowerPC 970 derivative. We can tell based on its specification: it has a rating of 10.5GFLOPS. Source : https://www.globalsecurity.org/military/systems/aircraft/f-22-avionics.htm

So based on this, we can infer that it is a PowerPC derivative. 10.5 GFLOPS is massive. For reference, integrated graphics only reached the TFLOPS mark in 2014-15, so it took roughly a decade to achieve a ~100x improvement.

Other sources of interest :

https://forums.anandtech.com/threads/does-nasa-still-use-386s.2355160/page-2

http://datasheet.digchip.com/205/205-00367-0-970.pdf (PowerPC 970 spec, it notes a much lower GFLOPS of about 9.7, but I suspect this was overcome with late variants of the 970)
No confirmation of the PowerPC G5. It is a PowerPC derivative, as the USA regularly uses those for the military. But as I said, its use in 2003 shows that it is at best 180nm and most probably older. Intel's 180nm processors had clock speeds of 2 GHz, so 10.5 GFLOPS is a possibility. Also, you can't rule out parallel computing. Why can't the F22 have parallel processors? In no way does this show anything about the exact processor or node size.

Read the articles I cited. For reference here:

https://www.livefistdefence.com/201...of-frances-e4-billion-india-offsets-plan.html

It lists the companies which get offsets, and list of joint ventures being formed.

https://www.thehindu.com/news/natio...-agreements/article26775545.ece?homepage=true

This article spreads the same old "Rafale scam" BS, but if you scroll down you will see a table with numbers listing the percentage of offsets that will be invested each year by each company (Dassault, Thales, MBDA, Snecma).
The Hindu offset dates appear reliable, but Livefist appears dubious. He says that he based it on some slides! So, the fact that the offsets have to be given later on supports my idea that the Rafale has to be modified.

And processor architecture is a function of the node it is on. You can't implement a modern 22nm-era CPU design on a 180nm node. The node is important in architecture design.
Absolutely, architecture is dependent on node size. But even there, architecture can be optimised for the role it needs to serve. So, an Intel 180nm PC processor will be way different from one used in a server on the same 180nm node. It will be different again for military use.

An interesting thing I did find was for pre IOC F-35 : http://www.f-16.net/forum/viewtopic.php?t=27539

The chips aren't rated for operation above 55C, and aren't rated for storage above 85C. They are also rated for 20G.
This is indeed interesting. It also clearly shows that there is parallel computing, as for every subdivision there is mention of 2-3 processors of the same type.

Yes, all will get damaged. The rate is still too low to be an issue. Otherwise we would already be seeing a whole lot of failures from Intel's 14nm CPUs and TSMC's 10nm CPUs.
There is an interesting article I read which states that the precision of chip manufacturing has increased due to large-scale production and accumulated experience, and also due to increases in supercomputation and the software that controls manufacturing precisely. So, 180nm chips made in 2001 and 180nm chips made in 2019 will have a big quality difference due to the increased sophistication of the machines & software! Also, for civilian use, the failure rate will be negligible (except in RAM and flash drives).

And again I will say that processor architecture is dependent on node size. For example, sometimes you will see gains just from a smaller node. You can get detailed articles on this from old AnandTech CPU reviews; read their articles on Apple's A-series CPUs.

And another thing is that smaller nodes allow smaller chips to fit in the same slot, which means more processing power for the same size. Also note that a lot of the computing in AR/VR systems is done by power-hungry GPUs, which in turn need fast CPUs to feed them instructions. I wouldn't be surprised if the F-35 has a modified GPGPU for its main DAS processing system.
The F35 information you gave above says that the F35 uses a separate graphics processor (actually 2 of them) to render graphics and video.

About size: a processor is about 1 square inch. This is negligible. A simple laptop can fit multiple GPUs in that thin space, and even a small desktop box can hold multiple CPUs. So, it makes no sense to worry about size or slot count in a big piece of equipment like a plane. The F14, for example, generated 75kVA per engine (not sure if that was with the older TF30 or the newer F100), which was more than enough to power the entire plane single-handedly (without the 2nd engine's power). Even if the power generated is just 30-40kW in a plane, that will be enough to power it completely. So, how does the size or power consumption of a chipset in a jet even matter? 1 kW more power due to 15-20 older chipsets is still nothing. So, why bother about slot sizes or power consumption in such cases?
 

gryphus-scarface

No confirmation of the PowerPC G5. It is a PowerPC derivative, as the USA regularly uses those for the military. But as I said, its use in 2003 shows that it is at best 180nm and most probably older. Intel's 180nm processors had clock speeds of 2 GHz, so 10.5 GFLOPS is a possibility. Also, you can't rule out parallel computing. Why can't the F22 have parallel processors? In no way does this show anything about the exact processor or node size.
The need to upgrade the CIP for the F-22 was realised by 2001. Up until then the F-22s used the same systems used in the YF-22, which flew way back in 1991. source : https://www.militaryaerospace.com/a...electronics-but-plan-for-future-upgrades.html

These were then replaced by newer Raytheon CIPs roughly around 08/08/2003. Source : https://www.militaryaerospace.com/a...electronics-but-plan-for-future-upgrades.html

So the idea that they would replace it with outdated 90s CPUs is insane. They replaced it with something that was built by Hughes, and fitted and sold by Raytheon. Source : https://www.globalsecurity.org/military/systems/aircraft/f-22-avionics.htm

The known specifications of this CIP is 10.5GFLOPS and 2.9 DMIPS. Both of these match the specifications of the PowerPC 970.
Source :

https://www.globalsecurity.org/military/systems/aircraft/f-22-avionics.htm

http://datasheet.digchip.com/205/205-00367-0-970.pdf

Even if you assume that they used an older CPU, it can't be older than something on 180nm. And 180nm was phased out in 2002/2003. Now, since we know the upgrades were done in late 2003, please tell me how the F-22's CPU is many nodes behind, or in any way significantly outdated, at the time of the new CIP installation.

As for your nonsense about Intel: these are PowerPC CPUs, not Intel x86 CPUs. If by parallel CPUs you mean multicore, then maybe they are, but as far as I can tell PowerPC didn't get multicore until 2001, with POWER4. The G4 also has a dual-core option (debuted in 2004). Other than this, there is nothing older with multicore as far as I can tell. https://www.theregister.co.uk/2004/08/18/dual-cores_detailed/ https://en.wikipedia.org/wiki/List_of_PowerPC_processors

If you mean multiple CPUs, then each CIP is one CPU, each capable of 10.5GFLOPS. If you mean parallel instructions, then AltiVec again wasn't introduced until the G4. In no way is the F-22 CPU ancient.

In fact, this source even suggests that a third CIP was installed, as processing power was later deemed insufficient.

https://www.forecastinternational.com/archive/disp_pdf.cfm?DACH_RECNO=942

The Hindu offset dates appear reliable, but Livefist appears dubious. He says that he based it on some slides! So, the fact that the offsets have to be given later on supports my idea that the Rafale has to be modified.
Give one source that says the rest of the money is being spent in unknown ways. I've given two sources that give details on the spending.

Absolutely, architecture is dependent on node size. But even there, architecture can be optimised for the role it needs to serve. So, an Intel 180nm PC processor will be way different from one used in a server on the same 180nm node. It will be different again for military use.
What is your point here? They can't magically achieve higher performance with the same instruction set. Yes, there will be tuning. Server-class CPUs do better at multicore workloads but run at lower clock speeds per core to stay within TDP limits. They don't change the architecture itself for server-grade CPUs, if that's what you're implying.

There is an interesting article I read which states that the precision of chip manufacturing has increased due to large-scale production and accumulated experience, and also due to increases in supercomputation and the software that controls manufacturing precisely. So, 180nm chips made in 2001 and 180nm chips made in 2019 will have a big quality difference due to the increased sophistication of the machines & software! Also, for civilian use, the failure rate will be negligible (except in RAM and flash drives).
This is with regard to the rejection rate of chips, not the life the consumer can expect. And no, civilians don't see RAM failure. We see failures in HDDs because of limited write life, but the life of RAM is practically infinite. The life of SSDs is yet to be understood; we don't fully understand what causes them to die. The only confirmed thing is that they will lose data if left without power for a year or more.

https://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand . We can definitely get the required write life from SSDs if necessary. https://www.kingston.com/us/ssd/dwpd
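The endurance ratings in those links come down to one formula: rated terabytes written = DWPD × capacity × warranty period. A minimal sketch (the 1 DWPD, 480 GB, and 5-year values are assumptions for illustration, not any specific drive):

```python
def rated_tbw(dwpd, capacity_gb, warranty_years):
    """Rated terabytes written: drive writes per day x capacity x warranty days."""
    return dwpd * capacity_gb * warranty_years * 365 / 1000

print(f"rated endurance: {rated_tbw(dwpd=1, capacity_gb=480, warranty_years=5):.0f} TBW")
```

Picking a drive with a higher DWPD rating (or simply a bigger one) scales the write life accordingly, which is the point being made about getting the required endurance when necessary.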

Please stop with this nonsense that RAM or CPU reliability reduces with node size. The technology involved is different. https://forums.anandtech.com/threads/why-doesnt-ram-wear-out-like-an-ssd.2211544/

Even giants like Google, who write multiple petabytes per day, don't see RAM failures or CPU failures, but do see hard disk failures. So please leave this angle alone. Nothing will change with respect to the node here.

The F35 information you gave above says that the F35 uses a separate graphics processor (actually 2 of them) to render graphics and video.
Yes, it will have a separate GPU. It needs to do a lot of graphics processing. And to feed instructions to those GPUs you need a powerful CPU. So both need to be powerful.
About size: a processor is about 1 square inch. This is negligible. A simple laptop can fit multiple GPUs in that thin space, and even a small desktop box can hold multiple CPUs. So, it makes no sense to worry about size or slot count in a big piece of equipment like a plane. The F14, for example, generated 75kVA per engine (not sure if that was with the older TF30 or the newer F100), which was more than enough to power the entire plane single-handedly (without the 2nd engine's power). Even if the power generated is just 30-40kW in a plane, that will be enough to power it completely. So, how does the size or power consumption of a chipset in a jet even matter? 1 kW more power due to 15-20 older chipsets is still nothing. So, why bother about slot sizes or power consumption in such cases?
Size is constrained. Not so much in the F-22, but very much so in smaller planes like the Tejas. And again you fail to see the point. A smaller node allows a much larger transistor count, and thus more performance in the same space.

http://www.ausairpower.net/APA-Raptor.html

The electric power generated powers more than just the CIP; it is also used to power the hydraulics. Without power your plane is as good as gone. But yes, power isn't the constraint here; it's size. Given a particular size, a smaller node allows more transistors to fit. This is Moore's law. It's how we've been getting performance improvements all these years (along with architectural changes).
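The transistor-density point is easy to put numbers on: with ideal scaling, shrinking the feature size gives roughly (old_node / new_node)² more transistors in the same area, since the shrink happens in two dimensions. A sketch (ideal scaling only; real processes fall somewhat short of this):

```python
def density_gain(old_nm, new_nm):
    """Ideal transistor-density gain from a node shrink (area scales as feature size squared)."""
    return (old_nm / new_nm) ** 2

for old, new in [(180, 90), (180, 22), (90, 22)]:
    print(f"{old}nm -> {new}nm: ~{density_gain(old, new):.0f}x the transistors per area")
```

So a 180nm-to-22nm shrink buys roughly 67x the transistors in the same footprint, which is the "more performance in the same space" argument in concrete terms.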
 

Advaidhya Tiwari

The need to upgrade the CIP for the F-22 was realised by 2001. Up until then the F-22s used the same systems used in the YF-22, which flew way back in 1991. source : https://www.militaryaerospace.com/a...electronics-but-plan-for-future-upgrades.html

These were then replaced by newer Raytheon CIPs roughly around 08/08/2003. Source : https://www.militaryaerospace.com/a...electronics-but-plan-for-future-upgrades.html

So the idea that they would replace it with outdated 90s CPUs is insane. They replaced it with something that was built by Hughes, and fitted and sold by Raytheon. Source : https://www.globalsecurity.org/military/systems/aircraft/f-22-avionics.htm

The known specifications of this CIP is 10.5GFLOPS and 2.9 DMIPS. Both of these match the specifications of the PowerPC 970.
Source :

https://www.globalsecurity.org/military/systems/aircraft/f-22-avionics.htm

http://datasheet.digchip.com/205/205-00367-0-970.pdf

Even if you assume that they used an older CPU, it can't be older than something on 180nm. And 180nm was phased out in 2002/2003. Now, since we know the upgrades were done in late 2003, please tell me how the F-22's CPU is many nodes behind, or in any way significantly outdated, at the time of the new CIP installation.

As for your nonsense about Intel: these are PowerPC CPUs, not Intel x86 CPUs. If by parallel CPUs you mean multicore, then maybe they are, but as far as I can tell PowerPC didn't get multicore until 2001, with POWER4. The G4 also has a dual-core option (debuted in 2004). Other than this, there is nothing older with multicore as far as I can tell. https://www.theregister.co.uk/2004/08/18/dual-cores_detailed/ https://en.wikipedia.org/wiki/List_of_PowerPC_processors

If you mean multiple CPUs, then each CIP is one CPU, each capable of 10.5GFLOPS. If you mean parallel instructions, then AltiVec again wasn't introduced until the G4. In no way is the F-22 CPU ancient.

In fact, this source even suggests that a third CIP was installed, as processing power was later deemed insufficient.

https://www.forecastinternational.com/archive/disp_pdf.cfm?DACH_RECNO=942
I told you several times that the best processor node available in 2003 was 130nm, nothing better. Since the order was given in 2003, it must have been under planning for at least 3 years, as no one places an order haphazardly. That means the chipset must be at minimum from the 2000 level or earlier. The difference between 2003 & 1999 is just 4 years. So, the 1990s are not some ancient era with reference to 2003!

See this article from 2000, which says the Pentium 4 has 5.6 GFLOPS:
https://www.geek.com/chips/intel-brands-the-pentium-4-564800/

The F22 processor can at best be on 180nm, and definitely nothing newer than 180nm.

Give one source that says the rest of the money is being spent in unknown ways. I've given two sources that give details on the spending.
What is your point here? They can't magically achieve higher performance with the same instruction set. Yes, there will be tuning. Server-class CPUs do better at multicore workloads but run at lower clock speeds per core to stay within TDP limits. They don't change the architecture itself for server-grade CPUs, if that's what you're implying.
The architecture refers to the gate-level Boolean logic. Yes, architecture matters most in getting faster results and better performance. The architecture needed for processing signals will be completely different from what is needed for the general computation chips of computers.

This is with regard to the rejection rate of chips, not the life the consumer can expect. And no, civilians don't see RAM failure. We see failures in HDDs because of limited write life, but the life of RAM is practically infinite. The life of SSDs is yet to be understood; we don't fully understand what causes them to die. The only confirmed thing is that they will lose data if left without power for a year or more.

https://www.anandtech.com/show/6459/samsung-ssd-840-testing-the-endurance-of-tlc-nand . We can definitely get the required write life from SSDs if necessary. https://www.kingston.com/us/ssd/dwpd

Please stop with this nonsense that RAM or CPU reliability reduces with node size. The technology involved is different. https://forums.anandtech.com/threads/why-doesnt-ram-wear-out-like-an-ssd.2211544/

Even giants like Google, who write multiple petabytes per day, don't see RAM failures or CPU failures, but do see hard disk failures. So please leave this angle alone. Nothing will change with respect to the node here.
Look, I have had a bunch of failures of RAM, SSDs, pendrives & microSD cards. But no HDD or processor has failed yet.

I did not say that RAM reliability or any other reliability reduces with node size. I am only giving examples of how any chip can be damaged or degraded. Let us skip this discussion altogether, as it is becoming irrelevant in the context of military avionics.

Size is constrained. Not so much in the F-22, but very much so in smaller planes like the Tejas. And again you fail to see the point. A smaller node allows a much larger transistor count, and thus more performance in the same space.

http://www.ausairpower.net/APA-Raptor.html

The electric power generated powers more than just the CIP; it is also used to power the hydraulics. Without power your plane is as good as gone. But yes, power isn't the constraint here; it's size. Given a particular size, a smaller node allows more transistors to fit. This is Moore's law. It's how we've been getting performance improvements all these years (along with architectural changes).
As I said, the F14 produced 75kVA of power per engine, and the F16 produces 60kVA. Yes, the flight controls generally consume a large amount of power, like 15kW, and the radar consumes 4-6kW. But the remaining electronics have plenty of power left. Even the Tejas is 13+ metres long, and adding a few one-inch processors will definitely not need more space, even considering the other accessories.
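The power-budget argument can be written out explicitly. A sketch using the figures quoted in this thread (75 kVA per engine, ~15 kW flight controls, 4-6 kW radar) plus an assumed 1 kW penalty for running 15-20 older chips; the load list is illustrative, not an actual aircraft electrical budget, and kVA is treated as kW for simplicity:

```python
# Rough electrical budget in kW, using figures from the discussion (illustrative only).
generator_kw = 75  # one engine's generator; kVA treated as kW for this sketch
loads_kw = {
    "flight_controls": 15,
    "radar": 6,
    "other_avionics": 5,         # assumed
    "older_chipset_penalty": 1,  # assumed extra draw from 15-20 older processors
}

total = sum(loads_kw.values())
print(f"load: {total} kW of {generator_kw} kW -> {generator_kw - total} kW headroom")
```

With this arithmetic the headroom is large, which supports the claim that a kilowatt of extra avionics draw is not the binding constraint; the counterpoint in the thread is about physical slot size, not watts.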

It is true that the number of transistors doubles every generation, but that does not mean it is needed. Performance enhancement in civilian life is about power efficiency and longer battery life. But this is not needed in a fighter aircraft, as there is no dearth of power.
 

gryphus-scarface

Regular Member
Joined
Apr 20, 2019
Messages
148
Likes
123
Country flag
I told you several times that the best processor available in 2003 was 130nm, nothing better. Since the order was placed in 2003, it must have been under planning for at least 3 years, as no one places an order haphazardly. That means the chipset must be from the 2000 level or earlier at minimum. The difference between 2003 and 1999 is just 4 years, so the 1990s are not some ancient date with reference to 2003!

See this article from 2000, which says the Pentium 4 achieves 5.6 GFLOPS:
https://www.geek.com/chips/intel-brands-the-pentium-4-564800/

The F-22 processor can at best be 180nm, and definitely nothing smaller than 180nm.
And now you have finally accepted that the F-22 flies with something that isn't ancient compared to when it achieved IOC/FOC? A while back you were telling me that they intentionally use 1um and whatnot for these systems, and that too because of heat and shorting. They use the latest available, with at best a small delay. Each CIP offers 10.5 GFLOPS. The Pentium 4 is a different architecture from PowerPC and completely irrelevant; even then, its 5.6 GFLOPS is much less than the 10.5 GFLOPS of each CIP. The Tejas should ideally be using something more recent than 180nm: since IOC was achieved in 2013, it should be on at least 65nm. Unfortunately we have no fabs at this size, so any such chips would have to be imported.
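As a sanity check on the GFLOPS figures being thrown around: theoretical peak throughput is just cores × clock × FLOPs retired per cycle. A minimal sketch with purely illustrative inputs (these are not published specs for the CIP or the Pentium 4):

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak: cores x clock (GHz) x FLOPs retired per cycle."""
    return cores * clock_ghz * flops_per_cycle

# Illustrative only: one core at 1.4 GHz retiring 4 single-precision
# FLOPs per cycle (e.g. a 4-wide SIMD unit) gives 5.6 GFLOPS.
print(peak_gflops(1, 1.4, 4))  # 5.6
```

Real chips rarely sustain this peak; it is an upper bound, which is why the same node can land at very different GFLOPS numbers depending on architecture.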


The architecture refers to the boolean logic of the gates. Yes, architecture matters the most for getting faster results and better performance. The architecture needed for processing signals is completely different from what is needed for the general-purpose computation chips in computers.
I have no idea what you're talking about. You do realise that architectural changes today are achieving speed gains of 5%-10% right?

See also https://www.anandtech.com/show/10158/the-intel-xeon-e5-v4-review/3

We get far more gains by stuffing in more cores than from architectural enhancements these days, and this has been true for a good amount of time. And to stuff in more cores, you need a smaller node size.

Look, I have had a bunch of failures of RAM, SSDs, pen drives and microSD cards. But no HDD or processor has failed on me yet.

I did not say that RAM or any other reliability reduces with node size. I was only giving examples of how any chip can be damaged or degraded. Let us drop this discussion altogether, as it has become irrelevant in the context of military avionics.
Yes please.

As I said, the F-14 engine produced 75kVA of power per engine, and the F-16 produces 60kVA. Yes, the flight controls generally consume a large amount of power, around 15kW, and the radar consumes 4-6kW. But that leaves plenty of power for the remaining electronics. The Tejas is also 13+ metres long, and adding a few one-inch processors will definitely not need much more space, even counting the other accessories.

It is true that the number of transistors doubles every generation, but that does not mean it is needed. Performance enhancement in civilian life is about power efficiency and longer battery life. But this is not needed in a fighter aircraft, as there is no dearth of power.
Again, *performance* for the *same size*. Why is this hard to comprehend? You seem to think that we can just add chips willy nilly. There needs to be space for its peripherals, wiring, etc. It needs to be in such a way that it can be accessed easily for maintenance.
 

Advaidhya Tiwari

Senior Member
Joined
Aug 2, 2018
Messages
1,579
Likes
1,443
And now you have finally accepted that the F-22 flies with something that isn't ancient compared to when it achieved IOC/FOC? A while back you were telling me that they intentionally use 1um and whatnot for these systems, and that too because of heat and shorting. They use the latest available, with at best a small delay. Each CIP offers 10.5 GFLOPS. The Pentium 4 is a different architecture from PowerPC and completely irrelevant; even then, its 5.6 GFLOPS is much less than the 10.5 GFLOPS of each CIP. The Tejas should ideally be using something more recent than 180nm: since IOC was achieved in 2013, it should be on at least 65nm. Unfortunately we have no fabs at this size, so any such chips would have to be imported.
I never said 1um is used in fighter jets. 0.8-1um is used in missiles and I still say the same. Missiles are not very intelligent systems and don't need to think too much. They have just basic processors to get the job done.

As I said, the F-22 is using 1990s chips only, as it is impossible to adopt modern chips so quickly. Chips are first released for civilian use and feedback on the problems is obtained. Only after the technology has stabilised is it used for military purposes. So the F-22 is using at best 180nm, but most probably 350nm.

The Tejas does not need to indulge in this rat race and get some 65nm processor. 180nm should be enough with the right specialisation and architecture.

I have no idea what you're talking about. You do realise that architectural changes today are achieving speed gains of 5%-10% right?

See also https://www.anandtech.com/show/10158/the-intel-xeon-e5-v4-review/3

We get far more gains by stuffing in more cores than from architectural enhancements these days, and this has been true for a good amount of time. And to stuff in more cores, you need a smaller node size.
Architecture is what matters most when specialisation is required. You are speaking of generic usage where just raw speed matters more but in specific usage, architecture matters a lot. A 180nm specialised chip will perform better than 45nm chip which is not specialised.

Again, *performance* for the *same size*. Why is this hard to comprehend? You seem to think that we can just add chips willy nilly. There needs to be space for its peripherals, wiring, etc. It needs to be in such a way that it can be accessed easily for maintenance.
If you have seen how a modular chipset tray works, then you won't be talking like this. It is not necessary to have big wiring harnesses etc. Keeping multiple processors is rather straightforward and simple. It will take about as much extra space as a WiFi router.
 

gryphus-scarface

Regular Member
Joined
Apr 20, 2019
Messages
148
Likes
123
Country flag
I never said 1um is used in fighter jets. 0.8-1um is used in missiles and I still say the same. Missiles are not very intelligent systems and don't need to think too much. They have just basic processors to get the job done.

As I said, the F-22 is using 1990s chips only, as it is impossible to adopt modern chips so quickly. Chips are first released for civilian use and feedback on the problems is obtained. Only after the technology has stabilised is it used for military purposes. So the F-22 is using at best 180nm, but most probably 350nm.
180nm is from 2000. So enough of your bullshit; get to the point. 180nm is pretty damn recent compared to the F-22's FOC. You claim gibberish and don't cite any sources. I've demonstrated that the F-22 used cutting-edge tech, and similarly the F-35. Only you keep claiming that faster CPUs aren't needed. Then you say that smaller nodes will get shorted, then that they will melt. Enough bullshit. Provide one source that states planes fly with very outdated tech, or shut up. I'm sick of this nonsense from you. A 3-year-old design is by no means outdated, especially given the performance it delivered. 10.5 GFLOPS was massive back then. Show me one source stating a PowerPC chip could achieve 10.5 GFLOPS on a 350nm node.

The Tejas does not need to indulge in this rat race and get some 65nm processor. 180nm should be enough with the right specialisation and architecture.
Because it has no need for compute power. It has no need to provide headroom for future upgrades. It has no need for any of that.

Architecture is what matters most when specialisation is required. You are speaking of generic usage where just raw speed matters more but in specific usage, architecture matters a lot. A 180nm specialised chip will perform better than 45nm chip which is not specialised.
And what on earth is specialisation now? Do you even know what you're talking about, or do you keep throwing out new words to keep your argument going? These are generic CPUs based on the PowerPC architecture. FPGAs are not something you would fit into an F-22 with hopes of future upgrades. You stick in generic CPUs with high compute power so that they are ready for future requirements.


If you have seen how the modular chipset tray works, then you won't be talking like this. It is not necessary to have big wirings etc. Keeping multiple processor is rather straightforward and simle. It will take as much extra space as a wifi router
What is a modular chipset now? Have you ever opened a desktop cabinet? There is more than just the CPU: it needs a power supply, cooling, RAM, connections to the other components, etc. These need more space than 1 inch^2, and far more space than a WiFi router. Even if you go with a dual-CPU config like some servers, that is more complicated than you think. I'm starting to think you just blabber the first thing that comes to your mind. You clearly have no idea what you're talking about.

And to think that a multi cpu config is simple? I think many companies would be jumping to hire you as you seem to know a lot about how easy a multi CPU config is.
 

Advaidhya Tiwari

Senior Member
Joined
Aug 2, 2018
Messages
1,579
Likes
1,443
180nm is from 2000. So enough of your bullshit; get to the point. 180nm is pretty damn recent compared to the F-22's FOC. You claim gibberish and don't cite any sources. I've demonstrated that the F-22 used cutting-edge tech, and similarly the F-35. Only you keep claiming that faster CPUs aren't needed. Then you say that smaller nodes will get shorted, then that they will melt. Enough bullshit. Provide one source that states planes fly with very outdated tech, or shut up. I'm sick of this nonsense from you. A 3-year-old design is by no means outdated, especially given the performance it delivered. 10.5 GFLOPS was massive back then. Show me one source stating a PowerPC chip could achieve 10.5 GFLOPS on a 350nm node.
Yes, 180nm is 2000. But you were saying better than 180nm. I am saying that at best it is 180nm, and definitely not better.

Considering that the F-22 is used even today and is considered modern, it is a 20-year-old design.

And what on earth is specialisation now? Do you even know what you're talking about or do you keep throwing new works to make keep your argument? These are generic CPUs made based on the PowerPC architecture. FPGAs are not something you would fit into an F-22 with hope for future upgrades. You stick in gereric CPUs with high compute power so that they are ready for future requirements.
Specialisation in architecture is about the logic with which the processor works, suited to certain end goals.

Yeah, we don't keep CPUs in jets with the hope of upgrades. So your idea of future upgrades is unreal.

What is a modular chipset now? Have you ever opened a Desktop cabinet? There is more than just the CPU. It needs power supply, cooling, RAM, connection to the other components, etc. They need more space than 1 inch^2. It will take far more space than a WiFi router. Even if you go with a dual CPU config like some servers, that is more complicated than you think. I'm starting to think you just blabber the first thing that comes to your mind. You clearly have no idea what you're talking about.

And to think that a multi cpu config is simple? I think many companies would be jumping to hire you as you seem to know a lot about how easy a multi CPU config is.
Yeah, I have seen a server too. Just see how multiple processors are kept in parallel; that is modularity. It needs about as much space as a WiFi router.


No one says it is easy to build a dual-processor system. It requires loads of programming and architectural design to handle parallel computing efficiently. But once the required design exists, it is not hard.

Just go to a server room and see how they keep servers stacked with modular processor units.
 

gryphus-scarface

Regular Member
Joined
Apr 20, 2019
Messages
148
Likes
123
Country flag
Yes, 180nm is 2000. But you were saying better than 180nm. I am saying that at best it is 180nm, and definitely not better.

Considering that the F-22 is used even today and is considered modern, it is a 20-year-old design.
So? The point is that they used the latest when designed. You seem to be genuinely dense. The F-22 achieved IOC by 2005. So using a CPU from 2002 is nowhere near as outdated as you think.

Specialisation in architecture is about the logic in which the processor works which is suited for certain end goals.
More generic BS. Please understand what each architecture is first. There are things like pipeline depth, instruction width, etc., not some "logic with which the processor works". Architecture refers to the design of the chip: what is its pipeline length? Does it have a branch predictor? And so on.

Yeah, we don't keep CPU in jets with hope of upgrades. So, your idea of future upgrades is unreal
And now you're certified delusional. Read any info on the F-22; it all says the same thing. It uses 70% of each CIP's compute performance, with a total of 60% compute performance left as a buffer for future upgrades. Additionally, a slot has been left open for a third CIP. This is very much leaving compute power for upgrades.

Yeah, I have seen a server too. Just see how multiple processors are kept in parallel. That is modularity. It needs as much space as wifi router.

Ah so you want to fit a dual socket server motherboard and you think it is as easy as that? What about the cooling for each CPU? And what about the parallelism needs? Multi core offers very different parallelisation capabilities from Multi CPU. "As much space as a WiFi router" You clearly don't know what you're talking about.

No one says it is easy to make dual processor. It requires loads of programming and designing of architecture to handle parallel computing efficiently. But once there is required design it is not hard.

Just go to some server room and see how they keep serves stacked with modular processor units
We prefer multicore CPUs for a reason. The multiCPU configuration is irrelevant as it shares resources between CPUs, so you have doubled compute capability, but not much else.

When I say I want another chip, I don't need just its compute power, I need the I/O, the RAM, etc.
 

Advaidhya Tiwari

Senior Member
Joined
Aug 2, 2018
Messages
1,579
Likes
1,443
More generic BS. Please understand what each architecture is first. There are things like pipeline depth, instruction width, etc., not some "logic with which the processor works". Architecture refers to the design of the chip: what is its pipeline length? Does it have a branch predictor? And so on.
I understand it. It is just that I don't want to explain architecture in this forum. Then it will be a semiconductor classroom.

And now you're certified delusional. Read any info on the F-22; it all says the same thing. It uses 70% of each CIP's compute performance, with a total of 60% compute performance left as a buffer for future upgrades. Additionally, a slot has been left open for a third CIP. This is very much leaving compute power for upgrades.
Every military plane keeps buffers; none uses 100% of its processing power. It is not for upgrades but for contingency. Also, upgrades can come in the form of advanced equipment like EW, GaN AESA etc. being developed, but upgrades are not like going from Windows 8 to Windows 10. Enhancement is not upgrade; the specific role the plane is meant to do will stay exactly the same. If you want to add something like an HMDS to a plane without one, you can't just get a fancy helmet and plug it in. You will have to change a few display processors along with it and add processors to track eye movement to accommodate the change.

Ah so you want to fit a dual socket server motherboard and you think it is as easy as that? What about the cooling for each CPU? And what about the parallelism needs? Multi core offers very different parallelisation capabilities from Multi CPU. "As much space as a WiFi router" You clearly don't know what you're talking about
No, I don't want to fit a dual socket. I just wanted to show an example of how small it can be. It was only an example; I did not want to search for long to find the exact video and hence uploaded whatever I got first. My objective was just to show that the space requirement is not high, at least not for a plane which is 13+ metres in length and weighs 6-7 tons empty.

We prefer multicore CPUs for a reason. The multiCPU configuration is irrelevant as it shares resources between CPUs, so you have doubled compute capability, but not much else.

When I say I want another chip, I don't need just its compute power, I need the I/O, the RAM, etc
A multicore CPU is to ensure the entire processor is used evenly, to optimal limits. As the number of transistors increases, a single core would leave transistors unnecessarily unused, or the transistors at the back end would get used only to a limited extent. So we split the processor into cores.

I am speaking of parallel computing. Servers work on this method and it works quite well. I don't understand your obsession with cores and size. Doubling computing capacity is useless? Why? It means that out of 1000 data points received by, say, the AESA radar, 500 can be computed by one core and the other 500 by another. That will increase the speed of processing by 100%.
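The 500/500 split described above can be sketched with a worker pool. Everything here is a hypothetical stand-in, not an actual avionics pipeline: `process_return` is a dummy per-sample step, and a thread pool keeps the sketch simple (CPU-bound numeric code would use `ProcessPoolExecutor` so the work actually lands on separate cores).

```python
from concurrent.futures import ThreadPoolExecutor

def process_return(sample):
    # Hypothetical per-return processing (filtering, range/Doppler
    # estimation, etc.); a dummy squaring stands in here.
    return sample * sample

def process_batch(samples, workers=2):
    """Hand the batch to a pool of workers, e.g. 1000 radar returns
    split between two workers. The pool divides the work and the
    results come back in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_return, samples))
```

Whether this actually doubles throughput depends on the work being independent per sample, which holds for the embarrassingly parallel radar example but not for every avionics task.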
 

gryphus-scarface

Regular Member
Joined
Apr 20, 2019
Messages
148
Likes
123
Country flag
I understand it. It is just that I don't want to explain architecture in this forum. Then it will be a semiconductor classroom.


Every military plane keeps buffers; none uses 100% of its processing power. It is not for upgrades but for contingency. Also, upgrades can come in the form of advanced equipment like EW, GaN AESA etc. being developed, but upgrades are not like going from Windows 8 to Windows 10. Enhancement is not upgrade; the specific role the plane is meant to do will stay exactly the same. If you want to add something like an HMDS to a plane without one, you can't just get a fancy helmet and plug it in. You will have to change a few display processors along with it and add processors to track eye movement to accommodate the change.
Oh, so you think that's how compute power works? You just leave some extra in case you need it in a war situation? This isn't an engine. You leave some headroom just in case, and some headroom for upgrades. It makes future upgrades easier and cheaper; a software upgrade is always cheaper than a software + hardware upgrade.

Further, if you add a HMD, where do you think it will get its info from? From the system that's done its sensor fusion. And the main CIP will keep track of this and hand over the data as needed. So no, the main CIP does need to know about all the components and their functions.
No, I don't want to fit a dual socket. I just wanted to show an example of how small it can be. It was only an example; I did not want to search for long to find the exact video and hence uploaded whatever I got first. My objective was just to show that the space requirement is not high, at least not for a plane which is 13+ metres in length and weighs 6-7 tons empty.
Again the BS comes out. The F-22 has space for 3 CIPs, of which 2 are fitted. You seem to think that you can double its compute resources without any major changes.

A multicore CPU is to ensure the entire processor is used evenly, to optimal limits. As the number of transistors increases, a single core would leave transistors unnecessarily unused, or the transistors at the back end would get used only to a limited extent. So we split the processor into cores.
What does this even mean? A multicore CPU is simply a CPU with more than one core. Each core is like a CPU in itself (oversimplified). It is not for some sort of "optimal usage", whatever that is.
I am speaking of parallel computing. Servers work on this method and it works quite well. I don't understand your obsession with cores and size. Doubling computing capacity is useless? Why? It means that out of 1000 data points received by, say, the AESA radar, 500 can be computed by one core and the other 500 by another. That will increase the speed of processing by 100%.
I don't understand your inability to comprehend the difference between a multicore system and a multi-socketed system. And the fact that you think parallel processing can be applied anywhere shows your lack of understanding.
Sure that works for an AESA radar, but what about for other tasks, where single threaded performance is more important? And for situations where multi threading is more optimal, a multi core design scales more efficiently (thanks to shared L3 cache on most systems which is much faster than accessing RAM). And shifting down nodes is one of the main sources of performance benefits.
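The scaling dispute here is usually framed with Amdahl's law: if a fraction p of a task is parallelisable, n workers (cores or sockets) give a speedup of 1 / ((1 - p) + p/n). A small sketch with made-up fractions, just to show why "double the compute" does not mean "double the speed" for every task:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: overall speedup from n parallel workers when a
    fraction p of the workload can be parallelised."""
    return 1.0 / ((1.0 - p) + p / n)

# A fully parallel task doubles with two workers...
print(amdahl_speedup(1.0, 2))            # 2.0
# ...but a half-serial task gains only about 1.33x,
# no matter how the extra compute is packaged.
print(round(amdahl_speedup(0.5, 2), 2))  # 1.33
```

This is why radar batch processing scales well across workers while latency-sensitive, serial tasks are limited by single-thread performance.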

This discussion is going nowhere. I'm done with this.
 

Advaidhya Tiwari

Senior Member
Joined
Aug 2, 2018
Messages
1,579
Likes
1,443
This is the last reply I will be giving. I see no point in simply insisting on something.

Oh, so you think that's how compute power works? You just leave some extra in case you need it in a war situation? This isn't an engine. You leave some headroom just in case, and some headroom for upgrades. It makes future upgrades easier and cheaper; a software upgrade is always cheaper than a software + hardware upgrade.
I have mentioned before that every architecture has some redundancy built into the core, so that when some transistors fail, work is directed to another set of transistors and the processor does not fail. That is why headroom is always kept everywhere! I can't write essays for every single query. You have to think first.

Further, if you add a HMD, where do you think it will get its info from? From the system that's done its sensor fusion. And the main CIP will keep track of this and hand over the data as needed. So no, the main CIP does need to know about all the components and their functions.
For god's sake, I am only saying that any major addition or change of equipment will need a change in processing too, unless it is just projecting old data in a new form. I don't understand what you are saying here.

Again the BS comes out. The F-22 has space for 3 CIPs, of which 2 are fitted. You seem to think that you can make this 2x compute resources without any major changes.
By adding another chip we get 1.5 times the computation power in the F-22; 2x is when you compare 2 chips vs 1, not 3 vs 2! What is the point?

What does this even mean? A multicore CPU is simply a CPU with more than one core. Each core is like a CPU in itself (oversimplified). It is not for some sort of "optimal usage", whatever that is.
Each core is not like a separate CPU.

I don't understand your inability to comprehend the difference between a multi core system, and a multi socketed system. And the fact that you think parallel processing can be applied anywhere shows your lack of an understanding.
Sure that works for an AESA radar, but what about for other tasks, where single threaded performance is more important? And for situations where multi threading is more optimal, a multi core design scales more efficiently (thanks to shared L3 cache on most systems which is much faster than accessing RAM). And shifting down nodes is one of the main sources of performance benefits.
If you can use dual core, then you can also use dual processors, provided the ones who wrote the code have given it due consideration. If an operation can be split across cores, then it can be split across processors too. Multicore is faster than going through RAM thanks to the L3 cache, but its scope can't be scaled as much as a multiprocessor setup can; multiprocessor is extremely scalable. And the most important processing in jets (radar, RWR, SPJ etc.) is exactly what needs scaling.
 

Chinmoy

Senior Member
Joined
Aug 12, 2015
Messages
8,768
Likes
22,803
Country flag
This is a nice way of twisting facts.

2.5 pilots per plane against 1.5 pilots. He has taken into consideration the human endurance factor, but not the machine endurance factor. What would be the mean time between sorties? What would be the rate of supply and spares available? None of this has been considered. A plane is only as good as its pilot, and a pilot is only as good as his plane.

Second, he complained about firing ranges in the western sector. I wonder where the IAF drops all those bombs in its exercises? What large-calibre bomb is he talking of? We tested nukes in Pokhran, and the same Pokhran has now become too small for the IAF to test its payloads, as per Snehesh.

We have a firing range at Dibang in the eastern sector, but he has most probably not taken this into account when talking of high-altitude firing ranges. He has most probably also forgotten the range in HP which was recently in the news, with locals advocating against it.

On top of all this, he has forgotten that using jugaad tech we were able to bomb the hell out of the Pakistanis in Kargil. He side-stepped the technology and just pinned his thoughts on dumb-ammo bombing. All the rest is just old ranting, and he has twisted facts to suit himself, or rather, The Print.
 

gryphus-scarface

Regular Member
Joined
Apr 20, 2019
Messages
148
Likes
123
Country flag
This is a nice way of twisting facts.

2.5 pilots per plane against 1.5 pilots. He has taken into consideration the human endurance factor, but not the machine endurance factor. What would be the mean time between sorties? What would be the rate of supply and spares available? None of this has been considered. A plane is only as good as its pilot, and a pilot is only as good as his plane.

Second, he complained about firing ranges in the western sector. I wonder where the IAF drops all those bombs in its exercises? What large-calibre bomb is he talking of? We tested nukes in Pokhran, and the same Pokhran has now become too small for the IAF to test its payloads, as per Snehesh.

We have a firing range at Dibang in the eastern sector, but he has most probably not taken this into account when talking of high-altitude firing ranges. He has most probably also forgotten the range in HP which was recently in the news, with locals advocating against it.

On top of all this, he has forgotten that using jugaad tech we were able to bomb the hell out of the Pakistanis in Kargil. He side-stepped the technology and just pinned his thoughts on dumb-ammo bombing. All the rest is just old ranting, and he has twisted facts to suit himself, or rather, The Print.
Also, he says that "no matter how much simulation we do, it will always be different from the actual thing". While true, we currently have a far more intense training regime than most peacetime forces. The IAF training regime has actually been criticised for not using simulators enough in place of actual flying. Yet this clown claims that the IAF is doing too much simulator training.
 

Chinmoy

Senior Member
Joined
Aug 12, 2015
Messages
8,768
Likes
22,803
Country flag
Also, he says that "no matter how much simulation we do, it will always be different from the actual thing". While true, we currently have a far more intense training regime than most peacetime forces. The IAF training regime has actually been criticised for not using simulators enough in place of actual flying. Yet this clown claims that the IAF is doing too much simulator training.
Things are simple with people like him. The Print contacted him, paid him a sum, and gave him the synopsis to write. He is just elaborating The Print's synopsis to suit them and justify the payment made to him.
 
