
Board index » Computer Discussions » Latest Computer Technology News

 Post Posted: Thu Sep 30, 2010 1:50 pm 

Joined: Fri Apr 09, 2010 11:07 pm
Posts: 645
Location: Boonies
Adding cores to the CPU has become the general recipe for performance improvements in modern computers, even though we have heard before that the IT industry will face efficiency problems beyond 16 cores. New research published by MIT now suggests that the industry will run into a soft wall when 48 cores are reached, and that new operating system architectures may be required.

[Image: Intel Core i7 die]

The number of cores in modern CPUs has grown much more slowly than initially anticipated. The first mainstream quad-core processor (Intel Kentsfield) followed just 18 months after the release of the first dual-core processor (Intel Smithfield) in May 2005, and core counts haven’t grown much since then. Six physical cores (and 12 threads) is the top of Intel’s range, while AMD has 12 cores available right now and is talking about up to 16 cores in the not-too-distant future.

Intel has said in the past that, beyond 16 cores, much of the efficiency gained from the pure addition of cores may be gone, and improving software to take advantage of those cores may become much more important. That still appears to be true, and we are actually seeing both processor makers buy themselves some time by integrating GPUs into the processor package, which provides a path to cost savings, but also a way to increase overall application speed as far as floating-point performance is concerned.

However, the question of how many traditional CPU cores really make sense in a “many-core” environment remains. Dozens? Hundreds? Thousands?

MIT’s Frans Kaashoek has provided some clues, saying that current operating systems, especially Linux, can scale to take advantage of multiple cores with minor modifications to the underlying OS code. He and his team simulated a 48-core chip through an 8 x 6 core setup and monitored the performance change as cores were activated one by one. “At some point, the addition of extra cores began slowing the system down rather than speeding it up.” The explanation is that multiple cores often do redundant work and process the same data, which has to be kept in the chip’s memory for that time. As long as that memory is in use it is not available for other tasks, and a performance bottleneck results: as the number of cores increases, tasks that depend on the same data get split up into smaller and smaller chunks.

“The MIT researchers found that the separate cores were spending so much time ratcheting the [memory] counter up and down that they weren’t getting nearly enough work done,” the report states. However, the researchers also found that “slightly rewriting the Linux code so that each core kept a local count, which was only occasionally synchronized with those of the other cores, greatly improved the system’s overall performance.”
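To make the fix concrete, here is a minimal Python sketch of the pattern described above: instead of every core bouncing one shared counter’s cache line up and down, each core increments its own private shard, and the shards are summed only when the global value is actually needed. The class name and structure here are my own illustration, not the MIT team’s actual Linux patch.

```python
import threading

class ShardedCounter:
    """Per-core counters, synchronized only on read.

    Each "core" (here, a thread) increments a private shard, so
    increments never contend with increments on other shards. The
    shards are combined only at the occasional synchronization point.
    """

    def __init__(self, nshards):
        self.shards = [0] * nshards
        self.locks = [threading.Lock() for _ in range(nshards)]

    def add(self, shard, n=1):
        # Touch only this shard's counter and lock; no shared state.
        with self.locks[shard]:
            self.shards[shard] += n

    def value(self):
        # The occasional synchronization: lock and sum every shard
        # to recover an exact global count.
        total = 0
        for i, lock in enumerate(self.locks):
            with lock:
                total += self.shards[i]
        return total

if __name__ == "__main__":
    counter = ShardedCounter(4)
    threads = [
        threading.Thread(
            target=lambda s=s: [counter.add(s) for _ in range(10_000)]
        )
        for s in range(4)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter.value())  # 40000
```

The trade-off is the one the researchers describe: writes become cheap and contention-free, at the cost of a slightly more expensive (and, between synchronizations, slightly stale) global read.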

“The fact that that is the major scalability problem suggests that a lot of things already have been fixed. You could imagine much more important things to be problems, and they’re not. You’re down to simple reference counts,” Kaashoek said. “Our claim is not that our fixes are the ones that are going to make Linux more scalable,” he added. “The Linux community is completely capable of solving these problems, and they will solve them. That’s our hypothesis. In fact, we don’t have to do the work. They’ll do it.”

But is there a limit? Remzi Arpaci-Dusseau, a professor of computer science at the University of Wisconsin, thinks so: “The big question in the community is, as the number of cores on a processor goes up, will we have to completely rethink how we build operating systems.”

According to Arpaci-Dusseau, if the number of cores on a chip gets “significantly beyond 48,” new architectures and operating systems may become necessary. However, that may not be the case within the next 5 to 8 years. He noted that finding the problems is the hard part: “What that hints at for the rest of the community is that building techniques — whether they’re software techniques or hardware techniques or both — that help to identify these problems is going to be a rich new area as we go off into this multicore world.”

Admit Nothing, Deny EVERYTHING, DEMAND Pr00f!
