1 Operating Systems, CSCI 411: Introduction. Content from Operating Systems in Depth, Thomas W. Doeppner, ©2011; Operating Systems: Principles and Practice, Anderson and Dahlin, 2014; Tanenbaum, 2015; M. Doman.
2 Main Points (for today). Operating system definition: software to manage a computer’s resources for its users and applications. OS challenges: reliability, security, responsiveness, portability, … OS history.
3 In the Beginning … there was hardware: processor, storage, card reader, tape drive, drum. And not much else: no operating system, no libraries, no compilers. The IBM 701 (announced in April 1952) was clearly not the first computer, but it was the first computer to have something resembling an operating system (introduced somewhat later).
4 OS History Walk through some of the connections: Windows dragged in some tech from VMS and UNIX If relevant, draw in Toy, Nachos, Pintos Also: Pilot, Taos -> Nachos
5 IBM 650. OS: none. The photo shows the IBM 650 used by the FAA for air traffic control, though the machine was primarily used for commercial data processing. The IBM 650 was announced in 1953 and was the most popular computer of the 1950s. It was last made in 1962, and support was terminated some years later. It apparently had nothing resembling an operating system.
6 Programming Without an OS. The steps: (1) assemble all software into a deck of punched cards; (2) mount tapes containing data; (3) read the cards into the computer; (4) run the program (it probably crashes); (5) output (possibly a dump) goes to the printer. You get a 15-minute computer slot and pay $75 (about $611 in 2010 dollars). Steps 1, 2, 3, and 5 take 10 minutes, leaving 5 minutes for step 4! The notion of 15-minute slots and the $300/hour cost are from Ryckman (1983).
7 Enter the OS … Group jobs into batches; setup is done for all collectively. The software doing this was called the Input/Output System: the first operating system.
8 The Operating System’s Job: provide a user program with a better, simpler, cleaner model of the computer, and handle the management of all computer resources.
9 Early Operating Systems: Computers Very Expensive. One application at a time; it had complete control of the hardware; the OS was a runtime library. Users would stand in line to use the computer. Batch systems: keep the CPU busy by having a queue of jobs; the OS would load the next job while the current one runs. Users would submit jobs, and wait, and wait, and wait…
10 Batch Systems (1). Figure 1-3. An early batch system. (a) Programmers bring cards to the 1401. (b) The 1401 reads a batch of jobs onto tape. Tanenbaum & Bo, Modern Operating Systems: 4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
11 Batch Systems. Figure 1-4. Structure of a typical Fortran job. Tanenbaum & Bo, Modern Operating Systems: 4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
12 Time-Sharing Operating Systems: Computers and People Expensive. Multiple users on the computer at the same time. Multiprogramming: run multiple programs at the same time. Interactive performance: try to complete everyone’s tasks quickly. As computers became cheaper, it became more important to optimize for user time, not computer time.
13 Bonus Thought QuestionHow should an operating system allocate processing time between competing uses? Give the CPU to the first to arrive? To the one that needs the least resources to complete? To the one that needs the most resources? What if you need to allocate memory? Disk?
14 Today’s Operating Systems: Computers CheapSmartphones Embedded systems Web servers Laptops Tablets Virtual machines
15 Tomorrow’s Operating SystemsGiant-scale data centers Increasing numbers of processors per computer Increasing numbers of computers per user Very large scale storage
16 What is an operating system?Software to manage a computer’s resources for its users and applications In some sense, OS is just a software engineering problem: how do you convert what the hardware gives you into something that the application programmers want? For any OS area (file systems, virtual memory, networking, CPU scheduling), begin by asking two questions: what’s the hardware interface? (the physical reality) what’s the application interface? (the nicer abstraction) We’ve broken it down a bit: we have users and their applications. You probably already know about libraries – that applications can be linked with code that helps them do their job, e.g., like the C library, with malloc and free and string operations. But when you write an app, you write it as if it has the entire machine – you don’t need to worry about the fact that there are multiple other apps running at the same time. It doesn’t crash just because one of those other apps has a bug. This interface is an abstract virtual machine – an abstraction that allows programmers to ignore the OS. Much of the OS runs in “kernel mode” – we’ll describe that in the next lecture. That code provides applications the abstraction of their own dedicated hardware. And under all of that is another abstraction, that allows the OS to run on a variety of different hardware – this is the HAL. That way, you can change the underlying hardware, without changing (much of) the OS.
17 Components of a Modern Computer (2)Figure 1-1. Where the operating system fits in. Tanenbaum & Bo, Modern Operating Systems:4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
18 The Operating System as a Resource ManagerTop down view Provide abstractions to application programs Bottom up view Manage pieces of complex system Alternative view Provide orderly, controlled allocation of resources Tanenbaum & Bo, Modern Operating Systems:4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
19 The Operating System as an Extended MachineFigure 1-2. Operating systems turn ugly hardware into beautiful abstractions. Tanenbaum & Bo, Modern Operating Systems:4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
20 Operating System Roles. Referee: resource allocation among users and applications; isolation of different users and applications from each other; communication between users and applications. Illusionist: each application appears to have the entire machine to itself; an infinite number of processors, a (near) infinite amount of memory, reliable storage, reliable network transport. Glue: libraries, user interface widgets, … Each of these points is pretty complex! But we’ll see a lot of examples. 90% of the code in an OS is in the glue, but it’s mostly easy to understand, so we won’t spend any time on it. Consider the illusion, though: you buy a new computer, with more processors than the last one. But you don’t have to get all new software; the OS is the same, the apps are the same, but the system runs faster. How does it do that? You buy more memory; you don’t change the OS, you don’t change the apps, but the system runs faster. How does it do that? Part of the answer is that the OS serves as the referee between applications. How much memory should everyone have? How much of the CPU? The OS also has to isolate the different applications and users from each other: if one app crashes, you don’t want it to require a system reboot. If one user writes a buggy app on attu, you don’t want it to crash the system for everyone else. How is that even possible?
21 Thought Question What do you need from hardware to be able to:Isolate different applications from each other? Isolate different users from accessing each others files? Ask audience for ideas – they’ve taken machine structures.
22 Example: web service How does the server manage many simultaneous client requests? How do we keep the client safe from spyware embedded in scripts on a web site? How do we keep updates to the web site consistent?
23 OS Challenges. Reliability: does the system do what it was designed to do? Availability: what portion of the time is the system working? Mean Time To Failure (MTTF), Mean Time To Repair (MTTR). Security: can the system be compromised by an attacker? Privacy: data is accessible only to authorized users. Both reliability and security require very careful design and code. An excuse to define some terms!
24 OS Challenges: Performance. Latency/response time: how long does an operation take to complete? Throughput: how many operations can be done per unit of time? Overhead: how much extra work is done by the OS? Fairness: how equal is the performance received by different users? Predictability: how consistent is the performance over time?
25 What are Operating Systems?Possible definitions: the code that {Microsoft, Apple, Linus, Google} provides the code that you didn’t write the code that runs in privileged mode the code that makes things work the code that makes things crash etc.
26 OS Challenges: Portability. For programs: the application programming interface (API), the abstract machine interface. For the operating system: the hardware abstraction layer, hardware-specific OS kernel routines.
27 System Calls. The sole interface between user and kernel; implemented as library routines that execute trap instructions to enter the kernel. Errors are indicated by returns of –1; the error code is in errno:

    if (write(fd, buffer, bufsize) == -1) {
        // error!
        printf("error %d\n", errno);  // see perror
    }

System calls, such as fork, execv, read, write, etc., are the only means for application programs to communicate directly with the kernel: they form an API (application program interface) to the kernel. When a program calls such a routine, it is actually placing a call to a subroutine in a system library. The body of this subroutine contains a hardware-specific trap instruction which transfers control and some parameters to the kernel. On return to this library routine, the kernel provides an indication of whether or not there was an error and what the error was. The error indication is passed back to the original caller via the functional return value of the library routine. If there was an error, a positive-integer code identifying it is stored in the global variable errno. Rather than simply print this code out, as shown in the slide, one might instead print out an informative error message. This can be done via the perror routine.
28 System Calls. In the user portion of the address space, a call such as write(fd, buf, len) traps into the kernel; the kernel portion of the address space holds the kernel text, the kernel stack, and other stuff.
29 Operating Systems Abstraction Concernsproviding an “appropriate” interface for applications Concerns performance time and space sharing and resource management failure tolerance security marketability The purpose of an operating system is to make the underlying hardware useful. Rather than forcing programmers to deal with the complicated details of such hardware, more convenient idealized abstractions of the hardware are devised, implemented, and made available, replacing the direct interface to the hardware. Thus, as discussed in upcoming slides, we deal with files rather than disks, virtual memory rather than real memory, threads rather than processors, etc. However, in operating systems, even more so than in “ordinary” applications we must cope with the concerns listed in the slide. Performance is a huge issue. Some portions of the OS might be invoked often, say thousands of times per second. Reducing the number of instructions used is extremely important; we must also be concerned with how effectively the code uses the processor’s caches. Use of memory, both primary and secondary, is also important. If too much memory is used, the operating system might not “fit” in the available resources. Furthermore, too much memory use may require shuffling of code and data between primary and secondary storage, thus wasting time. We must also provide various forms of sharing: not only must different applications coexist on a computer at once, but different users might be using the same computer at the same time. Our operating system must be tolerant of failures: bugs in one application must not bring down another. Problems with hardware should not cause catastrophic failure. Security is increasingly a problem. We must protect the operating system, the applications running on it, and data maintained by it, from attack. (This is clearly not a solved problem!) Finally, for an OS to be successful, people must want to use it. 
It’s one thing to build an operating system that does an excellent job dealing with everything mentioned above, but if none of our existing applications can be made to run on it and we have to write everything from scratch, no one is going to want to use it.
30 Abstractions. Hardware: disks, memory, processors, network, monitor, keyboard, mouse. Operating system: files, programs, threads of control, communication, windows/graphics, input locator. The typical user has no desire to deal with the complexities of the underlying hardware, preferring instead to deal with abstractions and depend upon the operating system to map these abstractions into reality. For example, the user may have information that needs to be accessed and perhaps updated over long periods of time. While such information may well be stored on disk, the user probably doesn’t care how. The user’s computation will certainly use temporary storage that need not exist longer than the computation, but should be quickly accessible during the lifetime of the computation. The user’s computation will need a “compute engine” to carry out the computation; in some cases it will need multiple compute engines. Finally, the user may need to communicate with other users or computations on other computers, and may need to access information stored elsewhere. Designing the most suitable abstractions for an operating system to present to its users is a subject of continuing research. Soon we examine the abstractions provided by a particular operating system, Unix, that is in widespread use on computer workstations and larger machines. Later in the course we discuss alternate abstractions. Unix provides a relatively simple collection of abstractions (though many argue that Unix has become much too complicated). In particular, permanent storage is represented as files. We think of temporary storage as being built from the abstraction of primary memory, known as virtual memory. Each computation has its own private virtual memory—computations are known in Unix as processes. Traditionally, each process is embodied as a single abstract compute engine, but more recently the concept of multithreaded processes has been introduced.
In this model, each process may contain multiple compute engines, or threads, each capable of independent computation. Communication is not a very well developed concept in Unix systems. The general idea is that independent processes can transfer data back and forth among one another. We explore this idea and newer approaches to handling communication in the last section of the course.
31 Programs. A program is an abstraction that we view as consisting of data and code. Somehow we’ve got to build programs from the pieces of hardware available to us, in particular memory, disks, and processors.
32 A Program

    const int nprimes = 100;
    int prime[nprimes];

    int main() {
        int i;
        int current = 2;
        prime[0] = current;
        for (i=1; i<nprimes; i++) {
            int j;
          NewCandidate:
            current++;
            for (j=0; prime[j]*prime[j] <= current; j++) {
                if (current % prime[j] == 0)
                    goto NewCandidate;  // current is composite; try the next one
            }
            prime[i] = current;  // no prime factor found: current is prime
        }
        return 0;
    }
33 Processes. The fundamental abstraction of program execution: memory; processor(s), where each processor abstraction is a thread; the “execution context.” Unix, like many operating systems, uses the notion of a process as its fundamental abstraction of program execution. Each program runs in a separate process. Processes are protected from one another in the sense that the actions of one process cannot directly harm others. The abstraction comprises the memory of a program (known as its address space—the collection of locations that can be referenced by the process), the execution agents (processor abstractions), and other information, known collectively as the execution context, representing such things as the files the process is currently accessing, how it responds to exceptions, to external stimuli, etc. The processor abstraction is often called a thread. In “traditional” Unix programs, processes have only one thread, so we’ll use the word process to include the single thread running inside of it. Later in this course, when we cover multithreaded programming, we’ll be more careful and use the word thread when we are discussing the processor abstraction.
34 The Unix Address Space: stack; dynamic; bss; data; text. A Unix process’s address space appears to be three regions of memory: a read-only text region (containing executable code); a read-write region consisting of initialized data (simply called data), uninitialized data (BSS—a directive from an ancient assembler (for the IBM 704 series of computers), standing for Block Started by Symbol and used to reserve space for uninitialized storage), and a dynamic area; and a second read-write region containing the process’s user stack (a standard Unix process contains only one thread of control). The first area of read-write storage is often collectively called the data region. Its dynamic portion grows in response to sbrk system calls. Most programmers do not use this system call directly, but instead use the malloc and free library routines, which manage the dynamic area and allocate memory when needed by in turn executing sbrk system calls. The stack region grows implicitly: whenever an attempt is made to reference beyond the current end of stack, the stack is implicitly grown to the new reference. (There are system-wide and per-process limits on the maximum data and stack sizes of processes.)
35 Process Control Blocks. Each process control block contains, among other things, the process’s PID, a link field, its return code, and its list of terminated children.
36 Memory Figure 1-8. (a) A quad-core chip with a shared L2 cache. (b) A quad-core chip with separate L2 caches. Tanenbaum & Bo, Modern Operating Systems:4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
37 Files. File systems provide a simple abstraction of permanent storage, i.e., storage that doesn’t go away when your program terminates or you shut down the computer. A user accesses information, represented as files, and the operating system is responsible for retrieving and storing this information. We expect files to last a long time and we associate names (as opposed to addresses) with files.
38 The File Abstraction A file is a simple array of bytesFiles are made larger by writing beyond their current end Files are named by paths in a naming tree System calls on files are synchronous As discussed three pages ago, most programs perform file I/O using library code layered on top of kernel code. In this section we discuss just the kernel aspects of file I/O, looking at the abstraction and the high-level aspects of how this abstraction is implemented. The Unix file abstraction is very simple: files are simply arrays of bytes. Many systems have special system calls to make a file larger. In Unix, you simply write where you’ve never written before, and the file “magically” grows to the new size (within limits). The names of files are equally straightforward—just the names labeling the path that leads to the file within the directory tree. Finally, from the programmer’s point of view, all operations on files appear to be synchronous—when an I/O system call returns, as far as the process is concerned, the I/O has completed. (Things are different from the kernel’s point of view, as discussed later.)
39 File Abstraction Issues: naming; allocating space on disk (permanent storage), organized for fast access and minimizing waste; shuffling data between disk and memory (high-speed temporary storage); coping with crashes. Among the issues in implementing the notion of files are those mentioned above.
40 File-Descriptor Table. File descriptors 0, 1, 2, 3, …, n–1 index a per-process table; each in-use entry refers to a structure in the kernel address space containing a reference count, the access mode, the file location (offset), and a pointer to the file’s inode.
41 Naming. (Almost) everything has a path name: files, directories, devices (known as special files): keyboards, displays, disks, etc. The notion that almost everything in Unix has a path name was a startlingly new concept when Unix was first developed, one that has proven to be important.
42 File Access Permissions. Who’s allowed to do what? Who: user (owner), group, others (the rest of the world). What: read, write, execute. Each file has associated with it a set of access permissions indicating, for each of three classes of principals, what sorts of operations on the file are allowed. The three classes are the owner of the file, known as user, the group owner of the file, known simply as group, and everyone else, known as others. The operations are grouped into the classes read, write, and execute, with their obvious meanings. The access permissions apply to directories as well as to ordinary files, though the meaning of execute for directories is not quite so obvious: one must have execute permission for a directory in order to follow a path through it. The system, when checking permissions, first determines the smallest class of principals the requester belongs to: user (smallest), group, or others (largest). It then, within the chosen class, checks for appropriate permissions.
43 I/O Devices Figure (a) The steps in starting an I/O device and getting an interrupt. Tanenbaum & Bo, Modern Operating Systems:4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
44 Disks. Figure 1-10. Structure of a disk drive. Tanenbaum & Bo, Modern Operating Systems: 4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
45 I/O Devices Figure (b) Interrupt processing involves taking the interrupt, running the interrupt handler, and returning to the user program. Tanenbaum & Bo, Modern Operating Systems:4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
46 Memory Sharing (1). So that multiple programs, each with their own private libraries and each accessing shared libraries, can coexist without destroying each other, we must have some means of protecting one program’s memory from another’s. Furthermore, we must protect the operating system from being inadvertently or maliciously attacked by programs. Using various hardware features, each program is prevented from accessing private portions of other programs. Shared portions can be accessed, but only in a read-only fashion. When the operating system is executing, it can access the user programs. But when user programs are executing, they can access the operating system only to do prescribed requests.
47 Scheduling Processes Avoiding Deadlocks
48 Virtual Machines Figure (a) A type 1 hypervisor. (b) A pure type 2 hypervisor. (c) A practical type 2 hypervisor. Tanenbaum & Bo, Modern Operating Systems:4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
49 Security Malware and DefensesTanenbaum & Bo, Modern Operating Systems:4th ed., (c) 2013 Prentice-Hall, Inc. All rights reserved.
51 Memory Sharing (2). Another memory-sharing issue arises when the programs that are to share memory are too big to fit in all at once (or perhaps even individually).
52 Virtual Memory. One popular approach for dealing with memory-protection and memory-fitting issues is to employ virtual memory. A program executes in a “virtual address space,” which is implemented partially in real memory and partially on disk. A facility supported as a joint effort of hardware and the operating-system kernel “maps” each program’s references to virtual memory into real memory and disk space. Since the operating system controls this mapping, it determines who can access what.
53 Hardware. As we’ve just mentioned, one of the major purposes of an operating system is to give its users an easy-to-use abstraction of a fairly complicated reality. A typical computer system consists of a number of disk drives (which the operating system accesses through specialized hardware called controllers), a network interface (also accessed via a controller), some sort of display/keyboard/mouse combination, and some amount of primary storage.
54 Concurrency. Computers are typically multiplexed among a number of users. Multiplexing is a concept that is well over thirty years old: many users share a computer by dividing its processing time among one another. It’s often known as multitasking. Another equally old, if not older, concept is the notion of concurrency within a computation: the ability of one computation to utilize multiple logical “compute engines,” or threads. Concurrency means that multiple threads are in progress at one time; on a computer that employs only a single real processor, the execution of these threads is multiplexed over time. Why do this? To optimize the use of computers: programs typically compute, perform I/O, compute, perform I/O, and so on; when one program is performing I/O, another should be computing. And to optimize the use of people: users want several applications active at once, both interactive applications (editing, drawing) and background applications (mail monitor, printing, file transfer).
55 Parallelism. Parallelism means that multiple threads are executing simultaneously: parallelism requires multiple processors. In this course we make this distinction between parallelism and concurrency; note, however, that not everyone distinguishes between these terms in this way.
56 Textbook Lazowska, Spring 2012: “The text is quite sophisticated. You won't get it all on the first pass. The right approach is to [read each chapter before class and] re-read each chapter once we've covered the corresponding material… more of it will make sense then. Don't save this re-reading until right before the mid-term or final – keep up.” You will need to tell me when something is confusing.
57 1960s OS Issues: multiprogramming; time sharing; software complexity; security.
58 2010s OS Issues: multiprogramming (not just one computer, but server farms); time sharing (voice, video, sound, etc.); software complexity (a bigger problem than could be imagined in the ’60s); security (ditto).
59 1950s: commercial data processing; scientific computing. Early computing was roughly split between two sorts of activities: commercial data processing and scientific computing. The former was characterized by much data processing, less computation, and “production” activities such as doing a payroll. The latter involved more computation and, generally, less data processing.
60 1960s: commercial data processing; scientific computing; time sharing; laboratory computing. In the 1960s, commercial data processing and scientific computing continued to be important. But the decade saw the introduction of time sharing as a means for making more productive use of an expensive computer, as well as the introduction of relatively cheap minicomputers, initially used in laboratories.
61 Atlas Computer. The Atlas’s operating system supported multiprogramming and seems to be the first major step in OS design and implementation after the Input/Output System of the IBM 701. It’s most famous for including the first implementation of virtual memory, which its designers called a “single-level store.” The machine was a collaboration between the University of Manchester (UK) and Ferranti Ltd. It was officially commissioned in 1962, though it seems to have been in operation starting in 1961.
62 The IBM Mainframe. OS: OS/360. The IBM System/360 comprised a range of computers. It was most successful in the realm of commercial data processing.
63 IBM 7094. OS: CTSS (among others). The IBM 7094 was modified at MIT and used as the hardware platform for CTSS (the Compatible Time-Sharing System), which seems to be the first time-sharing system.
64 Multics. The Honeywell 6180 was the successor to the GE 645 (Honeywell had purchased GE’s computer business), on which Multics first ran. The 6180 incorporated many improvements and hosted the second generation of Multics.
65 DEC PDP-8. OS: many, ranging from primitive to interesting (a multi-user time-sharing system; a virtual-machine system). The PDP-8, introduced in 1965, was the first “minicomputer” and was cheap enough to be used in small laboratories. Its manufacturer (DEC: Digital Equipment Corporation), as well as other companies, produced numerous kinds of such minicomputers and an even greater number of operating systems for them. This OS development continued well into the 1970s.
66 Unix. Dennis Ritchie and Ken Thompson were the original developers of Unix. Unix was originally implemented on a DEC PDP-7, but was soon ported to the more capable PDP-11.
67 History of Concurrency. Multiprogramming: 1961, 1962: Atlas, B5000; 1965: OS/360 MFT, MVT. Timesharing: 1961: CTSS (developed by MIT for the IBM 7094) and BBN’s time-sharing system for the DEC PDP-1; mid ’60s: Dartmouth Time-Sharing System (DTSS), TOPS-10 (DEC); late ’60s: Multics (MIT, GE, Bell Labs), Unix (Bell Labs). Multiprogramming refers to the notion of having multiple programs active at the same time, so that when the current program can no longer run (for example, because it’s blocked waiting for I/O), another is available to run. Timesharing is an extension of multiprogramming in which the execution of the active programs is time-sliced: each program runs for a short period of time, then another is run.
68 Apple’s Multitasking Announcement. “With Preemptive Multitasking, Everything Happens at Once. In today’s fast-paced world, you rarely get to do one thing at a time. Even in the middle of transforming, say, a Photoshop file, you may need to find a crucial piece of information on the web while you compose an urgent reply to a customer. What you need is a computer that can handle several different tasks at once, giving priority to your primary application, but still crunching away at other jobs in the background. … Darwin makes this possible by incorporating a powerful concept called preemptive multitasking. …” — Apple website, September 2000. Concurrency has only recently been employed in what are now called personal computers. So what’s preemptive multitasking? It’s multiprogramming in which one program can preempt the execution of another. In other words, it’s a necessary ingredient of timesharing, the notion that was first implemented in the 1960s.
69 History of Virtual Memory. 1961: Atlas computer, University of Manchester, UK. 1962: Burroughs B5000. 1972: IBM OS/370. 1979: 3BSD Unix, UC Berkeley. 1993: Microsoft Windows NT 3.1. 2000: Apple Macintosh OS X. The slide lists milestones in the history of virtual memory, from its first instance on Manchester’s Atlas computer in 1961 (when a working prototype was completed) to Apple’s announcement of VM support for the Macintosh.
70 Apple’s VM Announcement…Welcome to the Brave New World of Crash-Resistant Computing Let’s start with the notion of protected memory, and why it’s so important. … One of the ways an operating system ensures reliability is by protecting applications through a mechanism called protected memory (essentially walling off applications from each other). … Along with the protected memory mechanism, Darwin provides a super-efficient virtual memory manager to handle that protected memory space. So you no longer have to worry about how much memory an application like Photoshop needs to open large files. … — Apple website, September 2000 Apple is the latest of a long line of companies to “discover” virtual memory. The passage quoted in the slide is from Apple’s web page. “Darwin” was the code name of the kernel of Apple’s OS X operating system.
71 1970s Commercial data processing Scientific computing Time sharing Laboratory computing The 1970s saw the continued importance of the sorts of computing that were important in the ’60s. But what would become the most significant form of computing in later decades was introduced as personal computing and hobbyist computing. Personal computing Hobbyist computing
72 IBM’s Dominance Continues OS: OS/370 A photo of an IBM System/370 Model 168 can be found at This was a high-end model and one of the first “mainstream mainframes” to support virtual memory.
73 Scientific Computing OS: COS: single job at a time The photo is of a Cray-1 supercomputer in the Computer History Museum. The photo is by Ed Toton who, according to has released it to the public domain. The first Cray-1 was installed at Los Alamos National Laboratory. Cray’s philosophy was, essentially, no compromises allowed. Virtual memory was eschewed in favor of having enough real memory to make it unnecessary. Similarly, a memory cache wasn’t used — all memory was as fast as most contemporary computers’ caches. The OS, known as COS (Cray Operating System), was strictly batch-oriented and designed to run one job at a time. Jobs (including associated data) were assembled by attached mainframe computers. In essence, COS did not do a whole lot more than the Input/Output System of the IBM 701. In the 1980s, Cray adapted Unix for their machines, calling their version Unicos.
74 Xerox Alto OS: single-user, single-computation A photo of a Xerox Alto can be found at Produced in 1973 at Xerox’s Palo Alto Research Center (PARC), it’s generally regarded as the first computer to demonstrate the utility of a window manager and mouse. Though it was never a product, it introduced the notion of a serious personal computer. Though its historical importance is primarily in its pioneering use of bit-mapped graphics to implement a window-managed display, it had an interesting OS as well. It was strictly a single-user system and provided no protection whatsoever. However, it had a layered approach to OS design and implementation. Through its notion of “juntas,” one could remove layers of the OS, one at a time, just in case one needed the storage for a large program (and one could read them back from disk). A great deal of Alto documentation can be found at
75 MITS ALTAIR 8800 OS: none A photo of a MITS ALTAIR 8800 can be found at Introduced in 1975, it was an early, if not the first, “hobbyist computer.” It was also the platform on which Microsoft’s first product, a BASIC interpreter, ran. It had no operating system, just standalone programs (in particular, a BASIC interpreter).
76 CP/M Control Program for Microcomputers 1974 first hobbyist OS supported Intel 8080 and other systems clear separation of architecture-dependent code no multiprogramming no protection CP/M was the first OS affordable to individual hobbyists.
77 Apple II OS: none later: functionality similar to CP/M’s (not much) A photo of an Apple II can be found at It was introduced in 1977 with no OS. Later (in 1978) a simple OS was released with functionality similar to that of CP/M.
78 Microsoft Enters the OS Business: Late 1970s It’s called … Xenix a version of Unix predominant version of Unix in the 1980s used by MS internally into the 1990s
79 VAX-11/780 OS: VMS Unix Both: time sharing virtual memoryaccess protection concurrency The VAX-11, of which the 780 was the first model, was introduced in 1978 and was noted for two operating systems. The first, VMS, was the product of its manufacturer, DEC. It is still supported by HP, the company that purchased Compaq, which was the company that purchased DEC. VMS was very much the predecessor of modern Windows. The second operating system is Unix, most notably BSD Unix. Seventh-Edition Unix was ported to the VAX-11 by Bell Labs and called Unix 32V. It did not support virtual memory. Researchers at the University of California at Berkeley used it as the basis of 3BSD Unix, which did support virtual memory. Later came 4.0BSD, 4.1BSD, up through 4.4BSD. Along the way DEC produced its own version of Unix for the VAX-11, based on 4.2 BSD, called Ultrix. It was the implementation of the TCP/IP network protocols on 4.2BSD that did much to grow the Internet, making it accessible to a large number of academic computer science departments and industrial research organizations. This photo was taken by Thomas W. Doeppner.
80 1980s Commercial data processing Scientific computing Time sharing Laboratory computing The introduction of the IBM PC co-opted the term “personal computing,” which previously had been applied to work done at Xerox PARC. High-end “computer workstations” were introduced for professional computing, running more-or-less state-of-the-art operating systems. More affordable computers, such as IBM PCs (and clones) and Apple Macintoshes, had much less capable operating systems. The advent of the PC pretty much eliminated the minicomputer market and separate computers (and operating systems) for laboratory computing. Professional computing Personal computing
81 Two OSes Take Off Unix MS-DOS The 1980s saw the rise of two operating systems: Unix via BSD Unix on VAXes and Xenix on higher-end PCs, and MS-DOS on PCs.
82 IBM PC OS: PC-DOS (aka MS-DOS) (remarkably like CP/M) A photo of an early IBM PC can be found at
83 The Computer Workstation OS: Aegis supported: virtual memory distributed file system access protection concurrency An Apollo workstation, introduced in (Note the “C” on its front — this was its serial number (in hexadecimal)). It had a fairly sophisticated OS for its day. This photo was taken by Thomas W. Doeppner.
84 1990s Commercial data processing Scientific computing High-end personal computing The 1990s saw the convergence of low- and high-end personal computing: hardware powerful enough to run the operating systems of high-end personal computing became affordable at the low end. Low-end personal computing
85 Toy Operating Systems 1987: Andrew Tanenbaum of Vrije Universiteit, Amsterdam, publishes Operating Systems: Design and Implementation included is source code for a complete, though toy, operating system: Minix, sort of based on Unix 1991: Linus Torvalds buys an Intel 386 PC MS-DOS doesn’t support all its features (e.g., memory protection, multi-tasking) “soups up” Minix to support all this January 1992: Torvalds releases Linux 0.12 January 1992: Tanenbaum declares Linux obsolete
86 Late 80s/Early 90s 1988: Most major Unix vendors get together and form OSF to produce a common Unix: OSF/1, based on IBM’s AIX 1989: Microsoft begins work on NT 1990: OSF abandons AIX, restarts with Mach 1991: OSF releases OSF/1 1992: Sun releases Solaris 2 many SunOS (Solaris 1) programs are broken 1993: All major players but DEC have abandoned OSF/1 1993: Microsoft releases Windows NT 3.1 1994: Linux 1.0 released Note that the notion of a completely free (and useful) operating system didn’t really exist in the early 1990s. Unix was licensed by AT&T, who charged a hefty fee for its commercial use.
87 Late 90s IBM has three different versions of Unix, all called “AIX” 1996: DEC renames its OSF/1 “Digital Unix” 1996: Microsoft releases Windows NT 4 1996: Linux 2.0 released 1998: DEC is purchased by Compaq; “Digital Unix” is renamed “Tru64 Unix” 1999: Sun’s follow-on to Solaris 2.6 is called Solaris 7
88 2000s Commercial data processing Scientific computing Personal computing The 2000s have brought on the “gadget” as an important computing device, one deserving of an OS. Gadgets
89 The ’00s Part 1 2000: Microsoft releases Windows 2000 and Windows Me 2000: Linux 2.2 is released 2000: IBM “commits” to Linux (on servers) ~2000: Apple releases OS X, based on Unix (in particular, OSF/1) 2001: Linux 2.4 is released 2001: Microsoft releases Windows XP 2002: Compaq is purchased by HP 2003: SCO claims their code is in Linux, sues IBM; IBM countersues August 10, 2007: judge rules that SCO is not the rightful owner of the Unix copyright, Novell is Novell says there is no Unix in Linux September 2007: SCO files for Chapter 11 bankruptcy protection
90 The ’00s Part 2 2004: Linux 2.6 is released 2005: IBM sells PC business to Lenovo July 2005: Microsoft announces Windows Vista January 2007: Microsoft releases Windows Vista later in 2007: Microsoft starts hinting at Windows 7 April 2009: Oracle announces purchase of Sun Microsystems July 2009: Google announces Chrome OS October 2009: Microsoft releases Windows 7
91 History of C Early 1960s: CPL (Combined Programming Language) developed at Cambridge University and University of London 1966: BCPL (Basic CPL): simplified CPL intended for systems programming 1969: B: simplified BCPL (stripped down so its compiler would run on a minicomputer) used to implement earliest Unix Early 1970s: C: expanded from B motivation: they wanted to play “Space Travel” on a minicomputer used to implement all subsequent Unix OSes C has become the predominant language in OS development.
92 A Simple OS
93 Outline Unix overview processes file abstraction directories file representation file-oriented system calls We now present a brief introduction to a few of the more important aspects of the Unix operating system. In particular, we look at the Unix notions of processes and files—two concepts that are important throughout the course. We are looking at a rather stripped-down version of Unix, comparable to Sixth-Edition Unix, the first version made available to people outside of Bell Laboratories (in the mid 1970s).
94 A Program const int nprimes = 100; int prime[nprimes]; int main() { int i; int current = 2; prime[0] = current; for (i=1; i
95 Processes Fundamental abstraction of program execution memory processor(s) each processor abstraction is a thread “execution context” Unix, like many operating systems, uses the notion of a process as its fundamental abstraction of program execution. Each program runs in a separate process. Processes are protected from one another in the sense that the actions of one process cannot directly harm others. The abstraction comprises the memory of a program (known as its address space—the collection of locations that can be referenced by the process), the execution agents (processor abstractions), and other information, known collectively as the execution context, representing such things as the files the process is currently accessing, how it responds to exceptions and to external stimuli, etc. The processor abstraction is often called a thread. In “traditional” Unix programs, processes have only one thread, so we’ll use the word process to include the single thread running inside of it. Later in this course, when we cover multithreaded programming, we’ll be more careful and use the word thread when we are discussing the processor abstraction.
96 The Unix Address Space stack dynamic bss data text A Unix process’s address space appears to be three regions of memory: a read-only text region (containing executable code); a read-write region consisting of initialized data (simply called data), uninitialized data (BSS—a directive from an ancient assembler (for the IBM 704 series of computers), standing for Block Started by Symbol and used to reserve space for uninitialized storage), and a dynamic area; and a second read-write region containing the process’s user stack (a standard Unix process contains only one thread of control). The first area of read-write storage is often collectively called the data region. Its dynamic portion grows in response to sbrk system calls. Most programmers do not use this system call directly, but instead use the malloc and free library routines, which manage the dynamic area and allocate memory when needed by in turn executing sbrk system calls. The stack region grows implicitly: whenever an attempt is made to reference beyond the current end of stack, the stack is implicitly grown to the new reference. (There are system-wide and per-process limits on the maximum data and stack sizes of processes.)
97 Modified Program int nprimes; int *prime; int main(int argc, char *argv[]) { int i; int current = 2; nprimes = atoi(argv[1]); prime = (int *)malloc(nprimes*sizeof(int)); prime[0] = current; for (i=1; i
98 Creating a Process: Before The only way to create a new process is to use the fork system call. fork( ) parent process
99 Creating a Process: After By executing fork the parent process creates an almost exact clone of itself which we call the child process. This new process executes the same text as its parent, but contains a copy of the data and a copy of the stack. This copying of the parent to create the child can be very time-consuming. We discuss later how it is optimized. Fork is a very unusual system call: one thread of control flows into it but two threads of control flow out of it, each in a separate address space. From the parent’s point of view, fork does very little: nothing happens to the parent except that fork returns the process ID (PID—an integer) of the new process. The new process starts off life by returning from fork. It always views fork as returning a zero. fork( ) // returns p fork( ) // returns 0 parent process child process (pid = p)
100 Process Control Blocks PID Terminated children Link Return code Process Control Block
101 Fork and Wait pid_t pid; if ((pid = fork()) == 0) { /* some code is here for the child to execute */ exit(n); } else { int ReturnCode; while(pid != wait(&ReturnCode)) ; /* the child has terminated with ReturnCode as its return code */ }
102 Exec int pid; if ((pid = fork()) == 0) { /* we’ll soon discuss what might take place before exec is called */ execl("/home/twd/bin/primes", "primes", "300", 0); exit(1); } /* parent continues here */ while(pid != wait(0)) /* ignore the return code */ ;
103 Loading a New Image exec(prog, args) args prog’s bss prog’s data Most of the time the purpose of creating a new process is to run a new (i.e., different) program. Once a new process has been created, it can use the exec system call to load a new program image into itself, replacing the prior contents of the process’s address space. Exec is passed the name of a file containing a fully relocated program image (which might require further linking via a runtime linker). The previous text region of the process is replaced with the text of the program image. The data, BSS and dynamic areas of the process are “thrown away” and replaced with the data and BSS of the program image. The contents of the process’s stack are replaced with the arguments that are passed to the main procedure of the program. prog’s data exec(prog, args) prog’s text Before After
104 System Calls Sole interface between user and kernel Implemented as library routines that execute trap instructions to enter kernel Errors indicated by returns of –1; error code is in errno if (write(fd, buffer, bufsize) == –1) { // error! printf("error %d\n", errno); // see perror } System calls, such as fork, execv, read, write, etc., are the only means for application programs to communicate directly with the kernel: they form an API (application program interface) to the kernel. When a program calls such a routine, it is actually placing a call to a subroutine in a system library. The body of this subroutine contains a hardware-specific trap instruction which transfers control and some parameters to the kernel. On return to this library routine, the kernel provides an indication of whether or not there was an error and what the error was. The error indication is passed back to the original caller via the functional return value of the library routine. If there was an error, a positive-integer code identifying it is stored in the global variable errno. Rather than simply print this code out, as shown in the slide, one might instead print out an informative error message. This can be done via the perror routine.
105 System Calls Kernel portion of address space trap into kernel kernel text other stuff kernel stack Kernel portion of address space trap into kernel User portion of address space write(fd, buf, len)
106 Multiple Processes other stuff kernel stack other stuff kernel stack kernel text Each process has its own user address space, but there’s a single kernel address space. It contains context information for each user process, including the stacks used by each process when executing system calls.
107 The File Abstraction A file is a simple array of bytes Files are made larger by writing beyond their current end Files are named by paths in a naming tree System calls on files are synchronous As discussed three pages ago, most programs perform file I/O using library code layered on top of kernel code. In this section we discuss just the kernel aspects of file I/O, looking at the abstraction and the high-level aspects of how this abstraction is implemented. The Unix file abstraction is very simple: files are simply arrays of bytes. Many systems have special system calls to make a file larger. In Unix, you simply write where you’ve never written before, and the file “magically” grows to the new size (within limits). The names of files are equally straightforward—just the names labeling the path that leads to the file within the directory tree. Finally, from the programmer’s point of view, all operations on files appear to be synchronous—when an I/O system call returns, as far as the process is concerned, the I/O has completed. (Things are different from the kernel’s point of view, as discussed later.)
108 Standard File Descriptors int main( ) { char buf[BUFSIZE]; int n; const char* note = "Write failed\n"; while ((n = read(0, buf, sizeof(buf))) > 0) if (write(1, buf, n) != n) { (void)write(2, note, strlen(note)); exit(EXIT_FAILURE); } return(EXIT_SUCCESS); } The file descriptors 0, 1, and 2 are opened to access your window when you log in, and are preserved across forks, unless redirected.
109 Back to Primes … int nprimes; int *prime; int main(int argc, char *argv[]) { … for (i=1; i
110 Human-Readable Output int nprimes; int *prime; int main(int argc, char *argv[]) { … for (i=1; i
111 Running It int pid; if ((pid = fork()) == 0) { /* set up file descriptor 1 in the child process */ close(1); if (open("/home/twd/Output", O_WRONLY) == -1) { perror("/home/twd/Output"); exit(1); } execl("/home/twd/bin/primes", "primes", "300", 0); exit(1); } /* parent continues here */ while(pid != wait(0)) /* ignore the return code */ ;
112 File-Descriptor Table 1 2 3 File descriptor ref count access mode file location inode pointer . . . User address space n–1 Kernel address space
113 Allocation of File Descriptors Whenever a process requests a new file descriptor, the lowest numbered file descriptor not already associated with an open file is selected; thus #include
114 Redirecting Output … Twice if (fork() == 0) { /* set up file descriptors 1 and 2 in the child process */ close(1); close(2); if (open("/home/twd/Output", O_WRONLY) == -1) { exit(1); } execl("/home/twd/bin/program", "program", 0); } /* parent continues here */
115 Redirected Output File-descriptor table inode pointer 1 inode pointer WRONLY inode pointer File descriptor 1 File descriptor 2 1 WRONLY inode pointer User address space Kernel address space
116 Redirected Output After Write File-descriptor table 1 WRONLY 100 inode pointer File descriptor 1 File descriptor 2 1 WRONLY inode pointer User address space Kernel address space
117 Sharing Context Information if (fork() == 0) { /* set up file descriptors 1 and 2 in the child process */ close(1); close(2); if (open("/home/twd/Output", O_WRONLY) == -1) { exit(1); } dup(1); /* set up file descriptor 2 as a duplicate of 1 */ execl("/home/twd/bin/program", "program", 0); } /* parent continues here */
118 Redirected Output After Dup File-descriptor table File descriptor 1 2 WRONLY 100 inode pointer File descriptor 2 User address space Kernel address space
119 Fork and File Descriptors int logfile = open("log", O_WRONLY); if (fork() == 0) { /* child process computes something, then does: */ write(logfile, LogEntry, strlen(LogEntry)); … exit(0); } /* parent process computes something, then does: */
120 File Descriptors After Fork logfile Parent’s address space 2 WRONLY inode pointer logfile Child’s address space Kernel address space
121 Naming (almost) everything has a path name files directories devices (known as special files) keyboards displays disks etc. The notion that almost everything in Unix has a path name was a startlingly new concept when Unix was first developed; one that has proven to be important.
122 Uniformity int file = open("/home/twd/data", O_RDWR); // opening a normal file int device = open("/dev/tty", O_RDWR); // opening a device (one’s terminal // or window) int bytes = read(file, buffer, sizeof(buffer)); write(device, buffer, bytes); This notion that everything has a path name facilitates a uniformity of interface. Reading and writing a normal file involves a different set of internal operations than reading and writing a device, but they are named in the same style and the I/O system calls treat them in the same way. What we have is a form of polymorphism (though the term didn’t really exist when the original Unix developers came up with this way of doing things). Note that the open system call returns an integer called a file descriptor, used in subsequent system calls to refer to the file.
123 Directories unix etc home pro dev motd twd unix ... slide1 slide2 passwd motd twd unix ... Here is a portion of a Unix directory tree. The ovals represent files, the rectangles represent directories (which are really just special cases of files). slide1 slide2
124 Directory Representation Component Name Inode Number directory entry . 1 .. 1 unix 117 etc 4 A directory consists of an array of pairs of component name and inode number, where the latter identifies the target file’s inode to the operating system (an inode is a data structure maintained by the operating system that represents a file). Note that every directory contains two special entries, “.” and “..”. The former refers to the directory itself, the latter to the directory’s parent (in the case of the slide, the directory is the root directory and has no parent, thus its “..” entry is a special case that refers to the directory itself). home 18 pro 36 dev 93
125 Hard Links % ln /unix /etc/image # link system call unix etc home prodev twd unix ... image motd Here are two directory entries referring to the same file. This is done, via the shell, through the ln command which creates a (hard) link to its first argument, giving it the name specified by its second argument. The shell’s “ln” command is implemented using the link system call. slide1 slide2 % ln /unix /etc/image # link system call
126 Directory Representation . 1 .. 1 unix 117 etc 4 home 18 pro 36 dev 93 Here are the (abbreviated) contents of both the root (/) and /etc directories, showing how /unix and /etc/image are the same file. Note that if the directory entry /unix is deleted (via the shell’s “rm” command), the file (represented by inode 117) continues to exist, since there is still a directory entry referring to it. However if /etc/image is also deleted, then the file has no more links and is removed. To implement this, the file’s inode contains a link count, indicating the total number of directory entries that refer to it. A file is actually deleted only when its inode’s link count reaches zero. Note: suppose a file is open, i.e., is being used by some process, when its link count becomes zero. Rather than delete the file while the process is using it, the file will continue to exist until no process has it open. Thus the inode also contains a reference count indicating how many times it is open: in particular, how many system file table entries point to it. A file is deleted when and only when both the link count and this reference count become zero. The shell’s “rm” command is implemented using the unlink system call. Note that /etc/.. refers to the root directory. . 4 .. 1 image 117 motd 33
127 Soft Links unix etc home pro dev twd unix ... mylink image twd /unix Differing from a hard link, a soft link (or symbolic link) is a special kind of file containing the name of another file. When the kernel processes such a file, rather than simply retrieving its contents, it makes use of the contents by replacing the portion of the directory path that it has already followed with the contents of the soft-link file and then following the resulting path. Thus referencing /home/twd/mylink results in the same file as referencing /unix. Referencing /etc/twd/unix/slide1 results in the same file as referencing /home/twd/unix/slide1. The shell’s “ln” command with the “-s” flag is implemented using the symlink system call. /unix slide1 slide2 /home/twd % ln –s /unix /home/twd/mylink % ln –s /home/twd /etc/twd # symlink system call
128 Working Directory Maintained in kernel for each process paths not starting from “/” start with the working directory changed by use of the chdir system call displayed (via shell) using “pwd” how is this done? The working directory is maintained (as the inode number (explained subsequently) of the directory) in the kernel for each process. Whenever a process attempts to follow a path that doesn’t start with “/”, it starts at its working directory (rather than at “/”).
129 Open #include
130 File Access Permissions Who’s allowed to do what? who user (owner) group others (rest of the world) what read write execute Each file has associated with it a set of access permissions indicating, for each of three classes of principals, what sorts of operations on the file are allowed. The three classes are the owner of the file, known as user, the group owner of the file, known simply as group, and everyone else, known as others. The operations are grouped into the classes read, write, and execute, with their obvious meanings. The access permissions apply to directories as well as to ordinary files, though the meaning of execute for directories is not quite so obvious: one must have execute permission for a directory file in order to follow a path through it. The system, when checking permissions, first determines the smallest class of principals the requester belongs to: user (smallest), group, or others (largest). It then, within the chosen class, checks for appropriate permissions.
131 Permissions Example % ls -lR .: total 2 drwxr-x--x 2 tom adm 1024 Dec 17 13:34 A drwxr tom adm 1024 Dec 17 13:34 B ./A: total 1 -rw-rw-rw- 1 tom adm 593 Dec 17 13:34 x ./B: -r--rw-rw- 1 tom adm 446 Dec 17 13:34 x -rw----rw- 1 trina adm 446 Dec 17 13:45 y In the current directory are two subdirectories, A and B, with access permissions as shown in the slide. Note that the permissions are given as a string of characters: the first character indicates whether or not the file is a directory, the next three characters are the permissions for the owner of the file, the next three are the permissions for members of the file’s group, and the last three are the permissions for the rest of the world. Quiz: the users tom and trina are members of the adm group; andy is not. May andy list the contents of directory A? May andy read A/x? May trina list the contents of directory B? May trina modify B/y? May tom modify B/x? May tom read B/y?
132 Setting File Permissions #include
133 Creating a File Use either open or creat open(const char *pathname, int flags, mode_t mode) flags must include O_CREAT creat(const char *pathname, mode_t mode) open is preferred The mode parameter helps specify the permissions of the newly created file permissions = mode & ~umask Originally in Unix one created a file only by using the creat system call. A separate O_CREAT flag was later given to open so that it, too, can be used to create files. The creat system call fails if the file already exists. For open, what happens if the file already exists depends upon the use of the flags O_EXCL and O_TRUNC. If O_EXCL is included with the flags (e.g., open("newfile", O_CREAT|O_EXCL, 0777)), then, as with creat, the call fails if the file exists. Otherwise, the call succeeds and the (existing) file is opened. If O_TRUNC is included in the flags, then, if the file exists, its previous contents are eliminated and the file (whose size is now zero) is opened. When a file is created by either open or creat, the file’s initial access permissions are the bitwise AND of the mode parameter and the complement of the process’s umask (explained in the next slide).
134 Umask Standard programs create files with “maximum needed permissions” as mode compilers: 0777 editors: 0666 Per-process parameter, umask, used to turn off undesired permission bits e.g., turn off all permissions for others, write permission for group: set umask to 027 compilers: permissions = 0777 & ~(027) = 0750 editors: permissions = 0666 & ~(027) = 0640 set with umask system call or (usually) shell command The umask (often called the “creation mask”) allows programs to have wired into them a standard set of maximum needed permissions as their file-creation modes. Users then have, as part of their environment (via a per-process parameter that is inherited by child processes from their parents), a limit on the permissions given to each of the classes of security principals. This limit (the umask) looks like the 9-bit permissions vector associated with each file, but each one-bit indicates that the corresponding permission is not to be granted. Thus, if umask is set to 022, then, whenever a file is created, regardless of the settings of the mode bits in the open or creat call, read permission for group and others is not to be included with the file’s access permissions. You can determine the current setting of umask by executing the umask shell command without any arguments.
135 What Else? Beyond Sixth-Edition Unix (1975) multiple threads per process how is the process model affected? virtual memory fork? interactive, multimedia user interface scheduling? networking security