 Question
 Subject: Software Engineering and Maths Category: Computers > Software Asked by: tpearlv-ga List Price: \$4.00 Posted: 12 May 2002 05:13 PDT Expires: 19 May 2002 05:13 PDT Question ID: 15320
```What kind of mathematics is used in Software Engineering, and how? Is there any good book on Software Engineering mathematics?```
 Subject: Re: Software Engineering and Maths Answered By: joey-ga on 12 May 2002 11:28 PDT Rated:
```Software engineers, depending on their specialty, may use any number of different mathematical techniques.

Those trying to fit a project into a schedule while also fighting for the final product's efficiency use significant logarithmic and exponential-power algebra to calculate what's known as a piece of software's "Big O". "Big O" (never really sure what it stands for) is essentially a measure of how inefficient the software is -- how many machine cycles it takes, etc., to accomplish a programming construct (e.g. traversing a loop, linked list, etc.). This is useful because for any software requirement there are likely a number of solutions, and finding the best one is a balancing act between the most efficient, the quickest, and the easiest-to-program techniques. [more info on Big O: http://www.cs.nott.ac.uk/~nza/G5BADS/l2.html]

Other engineers who focus on signal processing (or digital signal processing -- DSP) generally use a significant amount of complicated calculus on a daily basis. Digital signal processing involves converting radio waves, for instance, into a digital wave or stored item before perhaps being converted back to video or audio. This type of development requires the use of advanced Fourier transforms (similar to Laplace transforms). [more info on DSP/Fourier transforms: http://www.siglab.ece.umr.edu/ee301/dsp/]

As stated in the comments, other forms of mathematics would be used for other projects, such as graphics manipulation (in that case, you'd be making significant use of matrix operations -- i.e. linear algebra). Again, the list goes on and on.
Based on my searching (you may want to search for "computer science" as opposed to "software engineering"), some books to check out (some of them textbooks) include: Concrete Mathematics: A Foundation for Computer Science [http://www.amazon.com/exec/obidos/ASIN/0201558025/qid=1021227524/sr=2-1/ref=sr_2_1/102-9680665-6336926] and A Course in Combinatorics [http://www.amazon.com/exec/obidos/ASIN/0521422604/ref=cm_bg_f_1/102-9680665-6336926]. Lastly, through my searching for these books I also found a page at Amazon.com devoted to "being a software engineer" and what it takes. Its first major demand is: "It all starts with math, sorry. Mathematics is required for an understanding of computer science. The branch of math called Combinatorics is required if you are going to be able to estimate the run time or memory requirements of a program." This page also lists a number of resources to get you started. [more info: http://www.amazon.com/exec/obidos/tg/guides/guide-display/-/2GO1L16X29VU0/qid=1021227805/sr=18-1/ref=sr_18_1/102-9680665-6336926] I hope this works out for you. --Joey```
 Request for Answer Clarification by tpearlv-ga on 12 May 2002 17:52 PDT
 ```Well, I really need an answer about real Software Engineering formal methods (not graphics and signal processing). Software Engineering encompasses topics like project management, software requirements analysis, software verification and validation, testing, and quality assurance. I know quality assurance techniques are based on statistical methods, but how and where are set theory and mathematical logic used?```
 Request for Answer Clarification by tpearlv-ga on 12 May 2002 17:57 PDT
 ```And how are formal languages and automata theory used? Are they used only in compiler design?```
 Clarification of Answer by joey-ga on 12 May 2002 20:19 PDT
 ```At that level, much of what you speak of is similar to what industrial engineers do -- I'm an industrial engineering student myself -- only more focused on software: e.g. quality assurance, efficiency testing, defect tracking, etc. I found a PDF'd PowerPoint presentation detailing much of what's involved in software engineering; it gives a rough overview of how software engineers run testing and the mathematical models they use to decide when to stop testing (reliability growth models, etc.). Both in this presentation and elsewhere in my searching, formal methods appear to be on the way out, as many people no longer find them cost-effective or worth the time. Engineers do use a considerable amount of statistics and control charts (and associated techniques like FMECA -- Failure Mode, Effects, and Criticality Analysis) for statistical defect testing. Based on this description of software engineering, it seems much of it is traditional project management, which uses little mathematics. The mathematics that is used is generally in (as you suggest) the quality-assurance facets, and it tends to focus on control models. It seems the most useful mathematics for software engineers would be statistical methods of quality improvement. [more info: http://www.cl.cam.ac.uk/Teaching/2001/SWEng1/SoftwareEng1aNotes.pdf] Please let me know if I can be of more assistance.```
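The statistical defect testing described above can be made concrete with a small Python sketch (illustrative only, not from the thread): computing classic Shewhart-style 3-sigma control limits over hypothetical per-build defect counts, the kind of calculation behind the control charts mentioned in the clarification.

```python
import statistics

# Hypothetical defect counts per weekly build (made-up data for illustration).
defects = [4, 7, 5, 6, 3, 8, 5, 4, 6, 5]

mean = statistics.mean(defects)
sigma = statistics.pstdev(defects)  # population standard deviation

# Shewhart-style 3-sigma control limits.
ucl = mean + 3 * sigma              # upper control limit
lcl = max(0.0, mean - 3 * sigma)    # a defect count can't go below zero

# Builds outside the limits would signal a process gone "out of control".
out_of_control = [d for d in defects if d > ucl or d < lcl]
print(f"mean={mean:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
print("out-of-control builds:", out_of_control)
```

With this particular data, all builds fall inside the limits; a spike well above the UCL would prompt investigation rather than more testing.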

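The answer's point that choosing among solutions is a balancing act driven by Big O can be sketched in Python (an illustrative example, not from the thread): two functionally identical membership tests with very different complexity.

```python
def contains_linear(items, target):
    """O(n): scan the list element by element -- simple, but slow on big data."""
    for item in items:
        if item == target:
            return True
    return False

def contains_set(item_set, target):
    """O(1) on average: hash lookup in a set -- faster, at the cost of
    building (and storing) the set up front."""
    return target in item_set

data = list(range(10000))
lookup = set(data)

# Both give the same answers; they differ in how many machine
# operations they need as the data grows.
assert contains_linear(data, 9999) == contains_set(lookup, 9999) == True
assert contains_linear(data, -1) == contains_set(lookup, -1) == False
```

Which one is "best" depends on exactly the trade-off the answer describes: efficiency versus programming effort versus memory.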
 ```Formal methods tend to use logic type arguments. You may be interested in www.cas.mcmaster.ca/~baber/Courses/46L03/MRSDLect.pdf```
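To illustrate what "logic type arguments" look like in practice, here is a minimal Python sketch (my own example, not from the linked notes): pre- and postconditions are predicates over program state, and formal methods prove -- rather than merely test -- that the code establishes them. The `assert` statements below only check the predicates at run time, but they are the same logical assertions a formal proof would reason about.

```python
def integer_sqrt(n):
    """Largest r with r*r <= n, for n >= 0."""
    assert n >= 0, "precondition: n must be non-negative"
    r = 0
    # Loop invariant (a formal-methods-style claim): r*r <= n holds
    # at the top of every iteration.
    while (r + 1) * (r + 1) <= n:
        r += 1
    # Postcondition: r is the integer square root of n.
    assert r * r <= n < (r + 1) * (r + 1)
    return r

print(integer_sqrt(10))   # 3
```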
 ```"Software Engineering" typically refers to the creation of large-scale pieces of software, like a word processor, a web browser, etc. When a company refers to someone as a "software engineer," they usually mean either a programmer (someone who creates very specific portions of code for a larger project) or a project manager (someone who designs the high-level setup of the project, the goals, coordination, etc.). If this is what you mean by software engineering, then very little mathematics is used in the process, typically. All forms of mathematics are involved in Computer Science and many programming projects. This depends on the field of specialization within CS or the program being created. For example, if you are engineering a piece of graphics software, you would find books about Linear Algebra very useful, as well as fast Numerical Methods. Perhaps Topology and Geometry would be of use as well. If you were engineering a network system, you would need knowledge about Graph Theory and Complexity. The list goes on and on. If you have a more specific question in mind, I can perhaps answer it with more specific references.```
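The linear-algebra-in-graphics point above can be shown with a tiny Python sketch (illustrative, not from the comment): rotating a 2-D point is a matrix-vector multiplication with the standard rotation matrix.

```python
import math

def rotate2d(point, theta):
    """Rotate a 2-D point about the origin by theta radians.
    This is multiplication by the rotation matrix [[cos, -sin], [sin, cos]],
    written out component by component."""
    x, y = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y)

# Rotating (1, 0) by 90 degrees lands (up to floating-point error) on (0, 1).
x, y = rotate2d((1.0, 0.0), math.pi / 2)
assert abs(x - 0.0) < 1e-9 and abs(y - 1.0) < 1e-9
```

Graphics pipelines chain many such transformations (rotation, scaling, translation) by multiplying their matrices together, which is why linear algebra is so central there.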
```I'm pretty sure that where "Big O" is concerned, the O stands for Order. It's big because it's usually written 'big' (upper-case). It is used to help model the behaviour of an algorithm with varying sizes of data. For example, if an algorithm took n * n time to compute a problem, where n was the size of the data, then the algorithm would be of order n to the 2nd power, since n times n gives you n squared -- often written "O(n^2)". The point of the big O notation, however, is what it leaves out. If an algorithm took n + 5n + (n * n * n) time to complete a problem, it would simply be O(n^3), as the difference in execution time contributed by the 'n' and '5n' parts of the equation is pretty insignificant compared to the n to the 3rd power at the end, which is the real processor hog. My understanding of this comes mainly from the book 'Data Structures and Algorithms in Java' by Goodrich and Tamassia.```
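The growth rates described in the comment above can be demonstrated by simply counting loop iterations in Python (an illustrative sketch, not from the comment): doubling n quadruples the work of a quadratic algorithm but only doubles the work of a linear one.

```python
def count_pairs(n):
    """Nested loop: examines every ordered pair, so work grows as n^2."""
    steps = 0
    for i in range(n):
        for j in range(n):
            steps += 1
    return steps

def count_items(n):
    """Single loop: work grows linearly with n."""
    steps = 0
    for i in range(n):
        steps += 1
    return steps

# O(n^2): doubling n multiplies the step count by 4.
assert count_pairs(200) == 4 * count_pairs(100)    # 40000 vs 10000
# O(n): doubling n merely doubles it.
assert count_items(200) == 2 * count_items(100)
```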
```petermcm-ga wrote: > I'm pretty sure that where "Big O" is concerned the O stands for > Order. It's big because it's usually written 'big' (upper-case). I can see how you'd make this mistake, since people usually describe a computer algorithm as being of "the order n-squared" or some such. "Big O" is actually the Greek letter omicron and can be written either lower or upper case. Big and little 'O' are associated with four other mathematical notations for describing algorithmic performance: big and little "theta" and big and little "omega". Each has its own meaning in terms of the amount of work done by an algorithm to produce a result. Big O is particularly useful because it describes the upper bound of an algorithm's performance -- i.e., it describes the "worst-case scenario", which is useful, for example, in establishing how long it will take you to search a text file of 'n' characters for all occurrences of an 'm'-character string. At some point in the software engineering life cycle, this math always comes into play, even when it is ignored by the engineers. I don't have the title of my college text in front of me, but I would look for a book focused on the math -- as opposed to the programming language -- since this question is about math. ;-)```
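The worst-case search example in the comment above can be sketched in Python (my own illustration, not from the thread): the naive method of finding every occurrence of an m-character pattern in an n-character text compares up to about (n - m + 1) * m characters, i.e. O(n * m) in the worst case.

```python
def naive_search(text, pattern):
    """Return the start index of every occurrence of pattern in text.
    Worst case: roughly (n - m + 1) * m character comparisons, O(n * m)."""
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):
        if text[i:i + m] == pattern:   # up to m comparisons per position
            hits.append(i)
    return hits

# Highly repetitive input is the worst case for this naive method.
print(naive_search("aaaaa", "aa"))   # [0, 1, 2, 3]
```

Smarter algorithms (e.g. Knuth-Morris-Pratt) reduce this bound to O(n + m), which is exactly the kind of improvement Big O analysis exists to quantify.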