Subject: Predicting Program behavior
Category: Science > Math
Asked by: theorycs-ga
List Price: $3.00
Posted: 30 Oct 2002 01:48 PST
Expires: 29 Nov 2002 01:48 PST
Question ID: 92961
Subject: Re: Predicting Program behavior
Answered By: mathtalk-ga on 31 Oct 2002 21:13 PST
Hi, theorycs-ga:

Thanks for the encouragement... Let me elaborate on a topic I mentioned in commenting on your other post.

A very practical sense in which one wants to "predict" program behavior at the CPU instruction level arises when a pipelined architecture is introduced. Here's a site that discusses the lengthening of the pipeline in Intel's Pentium 4:

[Hyper-pipelined Technology, about halfway down the page]
http://www.intel.com/design/Pentium4/prodbref/

Why are pipelines getting longer? It is the easiest way to reduce effective instruction timings (by overlapping their executions). No, nine women still cannot have a baby in one month, but... they can sequentially deliver one baby per month over nine months. It just requires lead time to fill the pipeline!

Here's a comparison of the Pentium 4 pipeline (20 stages) versus the Pentium 3 (10 stages) and AMD's Athlon (11 stages):

http://www17.tomshardware.com/cpu/00q4/001120/p4-09.html

Making each stage simpler means fewer transistors per stage and allows clock speeds to continue rising as well (the limiting factor being power dissipation).

To go with the "hyper-pipelined" hardware, Intel claims a "trace cache" branch prediction unit that can eliminate one-third of the wrong guesses that would be made on a Pentium 3 processor. When a wrong guess is made about the instruction branch to be taken, the pipeline filled to that point must be flushed. One approach in earlier designs was to use separate instruction execution pipelines for _both_ possible branches and retain only the correct one in the end. Here's an "early" citation for alternatives for "reducing branch misprediction penalties":

http://www.computer.org/computer/co1988/r7047abs.htm

But as pipelines grew longer, this became an infeasible waste of silicon. At the same time the cost of a misprediction became more expensive (as more tentative computation has to be discarded from the pipeline). This places tremendous emphasis on guessing correctly, improving the percentage accuracy to "rarely wrong".

Here's a discussion, drawing on work from 5 and 10 years ago, of state-of-the-art prediction algorithms:

http://www.cs.pdx.edu/~herb/cs201f02/branch.htm

Notice that "static" and many "dynamic" prediction schemes with 75-80% accuracy are considered too inefficient for practical use. A two-level adaptive branch prediction scheme by Yeh and Patt is touted as possibly providing 95% accuracy, at least on one sort of benchmark (a small simulation sketch of this kind of scheme follows this answer). Intel's adoption of two-level adaptive branching schemes with the Pentium Pro and Pentium II is discussed here:

http://x86.ddj.com/articles/branch/branchprediction.htm

With the Pentium 4, Intel claims to have reached this 95% regime. For a look at the design issues and pipeline possibilities going forward, here's a whitepaper by Intel architects Eric Sprangle and Doug Carmean:

http://systems.cs.colorado.edu/ISCA2002/FinalPapers/Deep%20Pipes.pdf

regards, mathtalk-ga
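Purely for illustration (this is not Intel's implementation, and it is much simpler than the schemes analysed in the articles above), here is a small Python sketch of a global-history two-level predictor built from 2-bit saturating counters, together with a back-of-the-envelope estimate of how misprediction cost scales with pipeline depth. The 8-bit history length, the synthetic loop-style branch trace, and the flush-cost numbers are assumptions made only to keep the sketch short.

```python
# A minimal sketch of a two-level adaptive predictor: a global branch-history
# register indexes a table of 2-bit saturating counters.  The history length,
# the synthetic branch trace, and the flush-cost model below are all
# assumptions chosen just to keep the example small.

def run_predictor(trace, history_bits=8):
    """Return the fraction of branch outcomes predicted correctly."""
    table_size = 1 << history_bits
    counters = [1] * table_size  # 2-bit counters: 0,1 predict not-taken; 2,3 predict taken
    history = 0                  # global history register holding the last N outcomes
    correct = 0

    for taken in trace:
        if (counters[history] >= 2) == taken:
            correct += 1
        # Train the counter toward the actual outcome (saturating at 0 and 3).
        if taken:
            counters[history] = min(3, counters[history] + 1)
        else:
            counters[history] = max(0, counters[history] - 1)
        # Shift the actual outcome into the history register.
        history = ((history << 1) | int(taken)) & (table_size - 1)

    return correct / len(trace)

# A branch that is taken 7 times and then falls through, repeated -- the kind
# of regular pattern a two-level scheme learns almost perfectly after warm-up.
accuracy = run_predictor(([True] * 7 + [False]) * 200)

# Rough cost model: each misprediction flushes the pipeline, so the average
# penalty per branch grows with pipeline depth (roughly 10 stages on a
# P3-class core versus 20 on a P4-class core).
for depth in (10, 20):
    print(f"accuracy {accuracy:.1%}, pipeline depth {depth}: "
          f"~{(1 - accuracy) * depth:.2f} wasted cycles per branch")
```

On a regular pattern like this the predictor settles well above the 95% mark after a short warm-up, and the printout makes the point above concrete: the same miss rate costs roughly twice as many wasted cycles on a 20-stage pipeline as on a 10-stage one.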
Subject: Re: Predicting Program behavior
From: davidmaymudes-ga on 30 Oct 2002 08:29 PST
Without further information, I would have to say the answer is "no, you usually can't predict anything useful". Unless there are additional constraints or structure in the problem you're posing, it's essentially random which program will be the one that gets access to a resource under contention.
Subject: Re: Predicting Program behavior
From: duncan2-ga on 30 Oct 2002 10:37 PST
This is a highly speculative question, and one which probably won't get you a very satisfactory answer. There are simply too many variables involved.

First, it's OS dependent. Are the processes dependent on each other? Is there a scheduler that manages the threads and prevents race conditions and resource starvation? Is this a cooperative or preemptive multitasking environment? Depending on the architecture of the OS and CPU hardware, the answer may well vary. And if the processes are interdependent, then prediction becomes even less feasible, in particular for error conditions. (For a simple example, consider that if Microsoft Word crashes in Windows 3.1, it's likely the entire OS will crash. In Windows 9x, with a different memory model, one application crash is much less likely to affect other running programs.)

Add to the mix that most commercial software has numerous bugs and memory leaks, and your mathematical model becomes strictly theoretical. Chaos theory might provide some models, but unless you simplify your scenario you're unlikely to produce anything very realistic.
Subject: Re: Predicting Program behavior
From: mathtalk-ga on 30 Oct 2002 19:48 PST
Back in the ancient days of mainframes, Burroughs Corp. developed some pragmatic ideas about scheduling for concurrent programs/workloads. The main idea was called the "working set" needed for a program's execution, meaning the typical memory, IO, and CPU time consumed by such an execution. The behavior of a particular instance of a program (relative to these normative values) was used, along with priority queues, by the "working set sheriff" supertask to determine which programs were swapped in and out of memory.

What made this relatively simple analysis satisfactory back then was the "batch" nature of the computer workload. People submitted "jobs" and the goal was mostly to get them to run to completion. The working-set sheriff made sure that jobs were "swapped into core" only when they were ready to run with resources likely to let them make substantial progress toward finishing. "Thrashing" of jobs in and out of core was more or less eliminated by this mathematically posed strategy. (A small sketch of this kind of admission decision follows this comment.)

Today's computer workloads don't really fit this model well. Much of what runs on my desktop consists of programs that are intended to run indefinitely, so the goal of completing as many tasks/programs as possible with limited resources over a planning horizon (timeframe) is not realistic. I'm not aware of anyone having tried to extend the working-set attributes of memory, IO, and CPU time consumption to a larger collection of measurements that might handle the new realities more successfully. Instead it seems we are in a regime where it is simpler to add more memory and CPU speed and hope for the best, without thinking carefully about the resources. I'd love to learn more about this if you come up with something.

best wishes, mathtalk-ga
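To make the "sheriff" idea above concrete, here is a toy Python sketch (not the actual Burroughs algorithm, whose details are not given here) of a working-set admission decision: ready jobs are swapped into core in priority order only while their estimated working sets still fit in free memory, so anything admitted can make real progress instead of thrashing. The job names, sizes, and the single-resource memory model are illustrative assumptions.

```python
# A toy illustration (not the actual Burroughs algorithm) of the principle
# described above: admit ready jobs, in priority order, only while their
# estimated working sets fit in the memory that remains -- so an admitted
# job can make real progress instead of thrashing.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int        # lower number = higher priority
    working_set_kb: int  # estimated memory the job needs to run usefully

def choose_jobs_to_swap_in(ready_jobs, free_memory_kb):
    """Pick which ready jobs to bring into core, working-set style."""
    admitted = []
    for job in sorted(ready_jobs, key=lambda j: j.priority):
        if job.working_set_kb <= free_memory_kb:
            admitted.append(job)
            free_memory_kb -= job.working_set_kb
        # Otherwise the job waits; loading it partially would only cause thrashing.
    return admitted

ready = [
    Job("payroll", priority=1, working_set_kb=600),
    Job("report",  priority=2, working_set_kb=900),
    Job("backup",  priority=3, working_set_kb=300),
]
print([j.name for j in choose_jobs_to_swap_in(ready, free_memory_kb=1000)])
# -> ['payroll', 'backup']  (report waits for a later pass)
```

A real sheriff would track IO and CPU consumption as well and revisit the decision as the running jobs' measured behavior drifted from the normative values, but the admission rule itself is the heart of the anti-thrashing guarantee.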
Subject: Re: Predicting Program behavior
From: theorycs-ga on 30 Oct 2002 21:09 PST
mathtalk-ga, I really appreciate your comments; they are the most sensible, and it seems you are the only person who (at least) understood what I wanted to ask. I will be forwarding my article to you after it's published. In the meantime, if you can write something extra, please do so. It doesn't matter if it's not much; any more information you can give will help. And of course don't give it as a comment, just post it as an answer. The three dollars isn't much, but five stars in your rating won't hurt.

Any other researcher who thinks he can provide a satisfactory answer, please feel free to do so; I will try to give the maximum points plus a good tip if I am satisfied. If you think the question is too general, please feel free to consider some particular OS, some particular constraints, etc., whichever you prefer.