Description:
High-Performance Embedded Computing, 2nd Edition, has been thoroughly updated and expanded to cover the wide range of techniques used in the design of modern high-performance embedded systems. Embedded multiprocessors now appear in smartphones, aircraft, automobiles, power equipment, medical devices, and many other applications, so it is essential that system designers understand how these complex technologies rely on increasingly sophisticated hardware, software, and design methodologies.
Professor Marilyn Wolf takes a distinctive quantitative approach to the design of modern embedded computing systems, explaining how to define and meet quantitative goals for performance, power, and cost. Real-world applications throughout the book make it a timely and highly valuable resource for professionals, researchers, and students.
Highlights of the second edition: an entirely new chapter on cyber-physical systems (CPS), the emerging intelligent systems that combine control theory with embedded computing.
Coverage of advanced topics in embedded computing, including thermal-aware design for embedded systems, configurable processors, software optimization for real-time constraints and power consumption, heterogeneous multiprocessors, and embedded middleware.
In-depth discussion of networking, reconfigurable systems, hardware/software co-design, security, and program analysis.
About the Author:
Marilyn Wolf is a professor at the Georgia Institute of Technology and a Georgia Research Alliance Eminent Scholar. She received her B.S., M.S., and Ph.D. degrees in electrical engineering from Stanford University in 1980, 1981, and 1984, respectively. She worked at Bell Laboratories from 1984 to 1989 and taught at Princeton University from 1989 to 2007. She is a Fellow of the IEEE and the ACM, a Golden Core member of the IEEE Computer Society, and a member of ASEE and SPIE. She received the ASEE Frederick E. Terman Award in 2003 and the IEEE Circuits and Systems Society Education Award in 2006. Her research interests include embedded computing, embedded video and computer vision, and VLSI systems.
Table of Contents:
Preface to the Second Edition
Preface to the First Edition
Acknowledgments
CHAPTER 1 Embedded Computing
1.1. The landscape of high-performance embedded computing
1.2. Cyber-physical systems and embedded computing
1.2.1. Vehicle control and operation
1.2.2. Medical devices and systems
1.2.3. Electric power
1.2.4. Radio and networking
1.2.5. Multimedia
1.3. Design methodologies
1.3.1. Why use design methodologies?
1.3.2. Design goals
1.3.3. Basic design methodologies
1.3.4. Embedded system design flows
1.3.5. Standards-based design methodologies
1.3.6. Design verification and validation
1.3.7. A methodology of methodologies
1.3.8. Joint algorithm and architecture development
1.4. Models of computation
1.4.1. Why study models of computation?
1.4.2. The Turing machine
1.4.3. Stream-oriented models
1.4.4. Representations of state and control
1.4.5. Parallelism and communication
1.4.6. Sources and uses of parallelism
1.5. Reliability, safety, and security
1.5.1. Why reliable embedded systems?
1.5.2. Fundamentals of reliable system design
1.5.3. Novel attacks and countermeasures
1.6. Consumer electronics architectures
1.6.1. Bluetooth
1.6.2. WiFi
1.6.3. Networked consumer devices
1.6.4. High-level services
1.7. Summary and a look ahead
What we learned
Further reading
Questions
Lab exercises
CHAPTER 2 CPUs
2.1. Introduction
2.2. Comparing processors
2.2.1. Evaluating processors
2.2.2. A taxonomy of processors
2.2.3. Embedded vs. general-purpose processors
2.3. RISC processors and digital signal processors
2.3.1. RISC processors
2.3.2. Digital signal processors
2.4. Parallel execution mechanisms
2.4.1. Very long instruction word processors
2.4.2. Superscalar processors
2.4.3. SIMD and vector processors
2.4.4. Thread-level parallelism
2.4.5. GPUs
2.4.6. Processor resource utilization
2.5. Variable-performance CPU architectures
2.5.1. Dynamic voltage and frequency scaling
2.5.2. Reliability and error-aware computing
2.6. Processor memory hierarchy
2.6.1. Memory component models
2.6.2. Register files
2.6.3. Caches
2.6.4. Scratch pad memory
2.7. Encoding and security
2.7.1. Code compression
2.7.2. Code and data compression
2.7.3. Low-power bus encoding
2.7.4. Security
2.8. CPU simulation
2.8.1. Trace-based analysis
2.8.2. Direct execution
2.8.3. Microarchitecture-modeling simulators
2.8.4. Power and thermal simulation and modeling
2.9. Automated CPU design
2.9.1. Configurable processors
2.9.2. Instruction set synthesis
2.10. Summary
What we learned
Further reading
Questions
Lab exercises
CHAPTER 3 Programs
3.1. Introduction
3.2. Code generation and back-end compilation
3.2.1. Models for instructions
3.2.2. Register allocation
3.2.3. Instruction selection and scheduling
3.2.4. Code placement
3.2.5. Programming environments
3.3. Memory-oriented optimizations
3.3.1. Loop transformations
3.3.2. Global optimizations
3.3.3. Buffer, data transfer, and storage management
3.3.4. Cache- and scratch pad-oriented optimizations
3.3.5. Main memory-oriented optimizations
3.4. Program performance analysis
3.4.1. Performance models
3.4.2. Path analysis
3.4.3. Path timing
3.5. Models of computation and programming
3.5.1. Interrupt-oriented languages
3.5.2. Data flow languages
3.5.3. Control-oriented languages
3.5.4. Java
3.5.5. Heterogeneous models of computation
3.6. Summary
What we have learned
Further reading
Questions
Lab exercises
CHAPTER 4 Processes and Operating Systems
4.1. Introduction
4.2. Real-time process scheduling
4.2.1. Preliminaries
4.2.2. Real-time scheduling algorithms
4.2.3. Multi-criticality scheduling
4.2.4. Scheduling for dynamic voltage and frequency scaling
4.2.5. Performance estimation
4.3. Languages and scheduling
4.4. Operating system design
4.4.1. Memory management in embedded operating systems
4.4.2. Structure of a real-time operating system
4.4.3. Operating system overhead
4.4.4. Support for scheduling
4.4.5. Interprocess communication mechanisms
4.4.6. Power management
4.4.7. File systems in embedded devices
4.5. Verification
4.6. Summary
What we have learned
Further reading
Questions
Lab exercises
CHAPTER 5 Multiprocessor Architectures
5.1. Introduction
5.2. Why embedded multiprocessors?
5.2.1. Requirements on embedded systems
5.2.2. Performance and energy
5.2.3. Specialization and multiprocessors
5.2.4. Flexibility and efficiency
5.3. Multiprocessor design techniques
5.3.1. Multiprocessor design methodologies
5.3.2. Multiprocessor modeling and simulation
5.4. Multiprocessor architectures
5.5. Processing elements
5.6. Interconnection networks
5.6.1. Models
5.6.2. Network topologies
5.6.3. Routing and flow control
5.6.4. Networks-on-chips
5.7. Memory systems
5.7.1. Traditional parallel memory systems
5.7.2. Models for memory
5.7.3. Heterogeneous memory systems
5.7.4. Consistent parallel memory systems
5.8. Physically distributed systems and networks
5.8.1. CAN bus
5.8.2. Time-triggered architecture
5.8.3. FlexRay
5.8.4. Aircraft networks
5.9. Multiprocessor design methodologies and algorithms
5.10. Summary
What we have learned
Further reading
Questions
Lab exercises
CHAPTER 6 Multiprocessor Software
6.1. Introduction
6.2. What is different about embedded multiprocessor software?
6.3. Real-time multiprocessor operating systems
6.3.1. Role of the operating system
6.3.2. Multiprocessor scheduling
6.3.3. Scheduling with dynamic tasks
6.4. Services and middleware for embedded multiprocessors
6.4.1. Standards-based services
6.4.2. System-on-chip services
6.4.3. Quality of service
6.5. Design verification
6.6. Summary
What we have learned
Further reading
Questions
Lab exercises
CHAPTER 7 System-Level Design and Hardware/Software Co-design
7.1. Introduction
7.2. Performance estimation
7.2.1. High-level synthesis
7.2.2. Accelerator estimation
7.3. Hardware/software co-synthesis algorithms
7.3.1. Program representations
7.3.2. Platform representations
7.3.3. Template-driven synthesis algorithms
7.3.4. Co-synthesis of general multiprocessors
7.3.5. Multi-objective optimization
7.3.6. Control and I/O synthesis
7.3.7. Memory systems
7.3.8. Co-synthesis for reconfigurable systems
7.4. Electronic system-level design
7.5. Thermal-aware design
7.6. Reliability
7.7. System-level simulation
7.8. Summary
What we have learned
Further reading
Questions
Lab exercises
CHAPTER 8 Cyber-Physical Systems
8.1. Introduction
8.2. Control theory and systems
8.3. Control/computing co-design
8.4. Networked control systems
8.5. Design methodologies
8.5.1. Model-based design
8.5.2. Formal methods
8.6. Security
8.7. Summary
What we have learned
Further reading
Questions
Lab exercises
Glossary
References
Index