Programming Massively Parallel Processors (English Edition, 3rd Edition) — free e-book download (pdf / epub / mobi, Baidu netdisk, online, 2025)
E-book download links for Programming Massively Parallel Processors (English Edition, 3rd Edition)
- File name
- [epub download] Programming Massively Parallel Processors (English Edition, 3rd Edition), epub format e-book
- [azw3 download] Programming Massively Parallel Processors (English Edition, 3rd Edition), azw3 format e-book
- [pdf download] Programming Massively Parallel Processors (English Edition, 3rd Edition), pdf format e-book
- [txt download] Programming Massively Parallel Processors (English Edition, 3rd Edition), txt format e-book
- [mobi download] Programming Massively Parallel Processors (English Edition, 3rd Edition), mobi format e-book
- [word download] Programming Massively Parallel Processors (English Edition, 3rd Edition), word format e-book
- [kindle download] Programming Massively Parallel Processors (English Edition, 3rd Edition), kindle format e-book
Book description:
This book introduces the fundamental concepts of parallel programming and GPU architecture and explores in detail the techniques for building parallel programs, covering topics such as performance, floating-point formats, parallel patterns, and dynamic parallelism; it is suitable for professionals as well as students. Case studies demonstrate the development process, beginning with the details of computational thinking and ending with efficient, working parallel programs. The new edition updates its discussion of CUDA, adds coverage of newer libraries such as cuDNN, and moves material that is no longer essential into the appendices. It also adds two new chapters on parallel patterns and updates the case studies to reflect current industry practice.
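For readers who want a taste of the CUDA C style the book teaches — its first complete example, listed in the table of contents below as Section 2.3, "A Vector Addition Kernel" — here is a minimal, self-contained sketch of such a kernel. This is an illustrative example written for this page, not code taken from the book; the names `vecAddKernel` and `vecAdd`, the block size of 256, and the vector length are arbitrary choices.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Each thread computes one element of the output vector C = A + B.
__global__ void vecAddKernel(const float *A, const float *B, float *C, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {                                    // guard against the last, partially filled block
        C[i] = A[i] + B[i];
    }
}

// Host wrapper: allocate device memory, copy inputs, launch the kernel, copy the result back.
void vecAdd(const float *h_A, const float *h_B, float *h_C, int n) {
    size_t size = n * sizeof(float);
    float *d_A, *d_B, *d_C;
    cudaMalloc((void **)&d_A, size);
    cudaMalloc((void **)&d_B, size);
    cudaMalloc((void **)&d_C, size);
    cudaMemcpy(d_A, h_A, size, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, size, cudaMemcpyHostToDevice);

    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;  // round up to cover all n elements
    vecAddKernel<<<blocks, threadsPerBlock>>>(d_A, d_B, d_C, n);

    cudaMemcpy(h_C, d_C, size, cudaMemcpyDeviceToHost);
    cudaFree(d_A);
    cudaFree(d_B);
    cudaFree(d_C);
}

int main(void) {
    const int n = 1000;
    float h_A[n], h_B[n], h_C[n];
    for (int i = 0; i < n; ++i) { h_A[i] = (float)i; h_B[i] = 2.0f * i; }
    vecAdd(h_A, h_B, h_C, n);
    printf("C[999] = %f\n", h_C[999]);  // expect 2997.0
    return 0;
}
```

The book builds on exactly this kind of example, extending it with thread organization, memory hierarchy, and performance tuning in later chapters.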
Table of contents:
Preface
Acknowledgements
CHAPTER.1 Introduction........................................1
1.1 Heterogeneous Parallel Computing........................................2
1.2 Architecture of a Modern GPU........................................6
1.3 Why More Speed or Parallelism........................................8
1.4 Speeding Up Real Applications........................................10
1.5 Challenges in Parallel Programming ........................................12
1.6 Parallel Programming Languages and Models.............................12
1.7 Overarching Goals........................................14
1.8 Organization of the Book........................................15
References ........................................18
CHAPTER.2 Data Parallel Computing........................................19
2.1 Data Parallelism........................................20
2.2 CUDA C Program Structure........................................22
2.3 A Vector Addition Kernel ........................................25
2.4 Device Global Memory and Data Transfer...................................27
2.5 Kernel Functions and Threading........................................32
2.6 Kernel Launch........................................37
2.7 Summary........................................38
Function Declarations........................................38
Kernel Launch........................................38
Built-in (Predefined) Variables ........................................39
Run-time API........................................39
2.8 Exercises........................................39
References ........................................41
CHAPTER.3 Scalable Parallel Execution........................................43
3.1 CUDA Thread Organization........................................43
3.2 Mapping Threads to Multidimensional Data................................47
3.3 Image Blur: A More Complex Kernel ........................................54
3.4 Synchronization and Transparent Scalability ...............................58
3.5 Resource Assignment........................................60
3.6 Querying Device Properties........................................61
3.7 Thread Scheduling and Latency Tolerance...................................64
3.8 Summary........................................67
3.9 Exercises........................................67
CHAPTER.4 Memory and Data Locality ........................................71
4.1 Importance of Memory Access Efficiency....................................72
4.2 Matrix Multiplication........................................73
4.3 CUDA Memory Types........................................77
4.4 Tiling for Reduced Memory Traffic........................................84
4.5 A Tiled Matrix Multiplication Kernel........................................90
4.6 Boundary Checks........................................94
4.7 Memory as a Limiting Factor to Parallelism................................97
4.8 Summary........................................99
4.9 Exercises........................................100
CHAPTER.5 Performance Considerations........................................103
5.1 Global Memory Bandwidth........................................104
5.2 More on Memory Parallelism........................................112
5.3 Warps and SIMD Hardware........................................117
5.4 Dynamic Partitioning of Resources........................................125
5.5 Thread Granularity........................................127
5.6 Summary........................................128
5.7 Exercises........................................128
References ........................................130
CHAPTER.6 Numerical Considerations ........................................131
6.1 Floating-Point Data Representation........................................132
Normalized Representation of M........................................132
Excess Encoding of E ........................................133
6.2 Representable Numbers........................................134
6.3 Special Bit Patterns and Precision in IEEE Format....................138
6.4 Arithmetic Accuracy and Rounding ........................................139
6.5 Algorithm Considerations........................................140
6.6 Linear Solvers and Numerical Stability......................................142
6.7 Summary........................................146
6.8 Exercises........................................147
References ........................................147
CHAPTER.7 Parallel Patterns: Convolution ........................................149
7.1 Background........................................150
7.2 1D Parallel Convolution—A Basic Algorithm ...........................153
7.3 Constant Memory and Caching........................................156
7.4 Tiled 1D Convolution with Halo Cells.......................................160
7.5 A Simpler Tiled 1D Convolution—General Caching.................165
7.6 Tiled 2D Convolution with Halo Cells.......................................166
7.7 Summary........................................172
7.8 Exercises........................................173
CHAPTER.8 Parallel Patterns: Prefix Sum........................................175
8.1 Background........................................176
8.2 A Simple Parallel Scan........................................177
8.3 Speed and Work Efficiency........................................181
8.4 A More Work-Efficient Parallel Scan........................................183
8.5 An Even More Work-Efficient Parallel Scan..............................187
8.6 Hierarchical Parallel Scan for Arbitrary-Length Inputs..............189
8.7 Single-Pass Scan for Memory Access Efficiency.......................192
8.8 Summary........................................195
8.9 Exercises........................................195
References ........................................196
CHAPTER.9 Parallel Patterns: Parallel Histogram Computation........................199
9.1 Background........................................200
9.2 Use of Atomic Operations ........................................202
9.3 Block versus Interleaved Partitioning........................................206
9.4 Latency versus Throughput of Atomic Operations.....................207
9.5 Atomic Operation in Cache Memory ........................................210
9.6 Privatization........................................210
9.7 Aggregation ........................................211
9.8 Summary........................................213
9.9 Exercises........................................213
Reference........................................214
CHAPTER.10 Parallel Patterns: Sparse Matrix Computation ...........215
10.1 Background........................................216
10.2 Parallel SpMV Using CSR........................................219
10.3 Padding and Transposition........................................221
10.4 Using a Hybrid Approach to Regulate Padding.......................224
10.5 Sorting and Partitioning for Regularization.............................227
10.6 Summary........................................229
10.7 Exercises........................................229
References ........................................230
CHAPTER.11 Parallel Patterns: Merge Sort........................................231
11.1 Background........................................231
11.2 A Sequential Merge Algorithm........................................233
11.3 A Parallelization Approach........................................234
11.4 Co-Rank Function Implementation........................................236
11.5 A Basic Parallel Merge Kernel ........................................241
11.6 A Tiled Merge Kernel........................................242
11.7 A Circular-Buffer Merge Kernel........................................249
11.8 Summary........................................256
11.9 Exercises........................................256
Reference........................................256
CHAPTER.12 Parallel Patterns: Graph Search......................................257
12.1 Background........................................258
12.2 Breadth-First Search ........................................260
12.3 A Sequential BFS Function ........................................262
12.4 A Parallel BFS Function........................................265
12.5 Optimizations........................................270
Memory Bandwidth........................................270
Hierarchical Queues ........................................271
Kernel Launch Overhead........................................272
Load Balance........................................273
12.6 Summary........................................273
12.7 Exercises........................................273
References ........................................274
CHAPTER.13 CUDA Dynamic Parallelism........................................275
13.1 Background........................................276
13.2 Dynamic Parallelism Overview ........................................278
13.3 A Simple Example........................................279
13.4 Memory Data Visibility........................................281
Global Memory ........................................281
Zero-Copy Memory........................................282
Constant Memory........................................282
Local Memory........................................282
Shared Memory........................................283
Texture Memory........................................283
13.5 Configurations and Memory Management ..............................283
Launch Environment Configuration........................................283
Memory Allocation and Lifetime........................................283
Nesting Depth........................................284
Pending Launch Pool Configuration .......................................284
Errors and Launch Failures........................................284
13.6 Synchronization, Streams, and Events.....................................285
Synchronization........................................285
Synchronization Depth........................................285
Streams ........................................286
Events ........................................287
13.7 A More Complex Example........................................287
Linear Bezier Curves........................................288
Quadratic Bezier Curves........................................288
Bezier Curve Calculation (Without Dynamic Parallelism) .....288
Bezier Curve Calculation (With Dynamic Parallelism) ..........290
Launch Pool Size........................................292
Streams ........................................292
13.8 A Recursive Example........................................293
13.9 Summary........................................297
13.10 Exercises........................................299
References ........................................301
A13.1 Code Appendix........................................301
CHAPTER.14 Application Case Study—non-Cartesian Magnetic Resonance Imaging............................305
14.1 Background........................................306
14.2 Iterative Reconstruction........................................308
14.3 Computing FHD ........................................310
Step 1: Determine the Kernel Parallelism Structure................312
Step 2: Getting Around the Memory Bandwidth Limitation...317
Step 3: Using Hardware Trigonometry Functions...........323
Step 4: Experimental Performance Tuning..............................326
14.4 Final Evaluation........................................327
14.5 Exercises........................................328
References ........................................329
CHAPTER.15 Application Case Study—Molecular Visualization and Analysis ....................................331
15.1 Background........................................332
15.2 A Simple Kernel Implementation........................................333
15.3 Thread Granularity Adjustment ........................................337
15.4 Memory Coalescing........................................338
15.5 Summary........................................342
15.6 Exercises........................................343
References ........................................344
CHAPTER.16 Application Case Study—Machine Learning ..............345
16.1 Background........................................346
16.2 Convolutional Neural Networks ........................................347
ConvNets: Basic Layers........................................348
ConvNets: Backpropagation........................................351
16.3 Convolutional Layer: A Basic CUDA Implementation of Forward Propagation.............................355
16.4 Reduction of Convolutional Layer to Matrix Multiplication........................................359
16.5 cuDNN Library........................................364
16.6 Exercises........................................366
References ........................................367
CHAPTER.17 Parallel Programming and Computational Thinking ........................................369
17.1 Goals of Parallel Computing........................................370
17.2 Problem Decomposition........................................371
17.3 Algorithm Selection........................................374
17.4 Computational Thinking........................................379
17.5 Single Program, Multiple Data, Shared Memory and Locality ...................................380
17.6 Strategies for Computational Thinking....................................382
17.7 A Hypothetical Example: Sodium Map of the Brain...............383
17.8 Summary........................................386
17.9 Exercises........................................386
References ........................................386
CHAPTER.18 Programming a Heterogeneous Computing Cluster ........................................387
18.1 Background........................................388
18.2 A Running Example........................................388
18.3 Message Passing Interface Basics........................................391
18.4 Message Passing Interface Point-to-Point Communication.....393
18.5 Overlapping Computation and Communication......................400
18.6 Message Passing Interface Collective Communication...........408
18.7 CUDA-Aware Message Passing Interface ...............................409
18.8 Summary........................................410
18.9 Exercises........................................410
Reference........................................411
CHAPTER.19 Parallel Programming with OpenACC.............................413
19.1 The OpenACC Execution Model........................................414
19.2 OpenACC Directive Format........................................416
19.3 OpenACC by Example........................................418
The OpenACC Kernels Directive........................................419
The OpenACC Parallel Directive ........................................422
Comparison of Kernels and Parallel Directives.......................424
OpenACC Data Directives........................................425
OpenACC Loop Optimizations........................................430
OpenACC Routine Directive........................................432
Asynchronous Computation and Data.....................................434
19.4 Comparing OpenACC and CUDA........................................435
Portability ........................................435
Performance........................................436
Simplicity ........................................436
19.5 Interoperability with CUDA and Libraries..............................437
Calling CUDA or Libraries with OpenACC Arrays................437
Using CUDA Pointers in OpenACC .......................................438
Calling CUDA Device Kernels from OpenACC.....................439
19.6 The Future of OpenACC........................................440
19.7 Exercises........................................441
CHAPTER.20 More on CUDA and Graphics Processing Unit Computing........................................443
20.1 Model of Host/Device Interaction........................................444
20.2 Kernel Execution Control ........................................449
20.3 Memory Bandwidth and Compute Throughput.......................451
20.4 Programming Environment........................................453
20.5 Future Outlook........................................455
References ........................................456
CHAPTER.21 Conclusion and Outlook........................................457
21.1 Goals Revisited........................................457
21.2 Future Outlook........................................458
Appendix A: An Introduction to OpenCL........................................461
Appendix B: THRUST: a Productivity-oriented Library for CUDA.....................475
Appendix C: CUDA Fortran........................................493
Appendix D: An Introduction to C++ AMP........................................515
Index ........................................535
About the authors:
By David B. Kirk and Wen-mei W. Hwu. David B. Kirk is a member of the U.S. National Academy of Engineering, an NVIDIA Fellow, and a former Chief Scientist of NVIDIA. He led the development of NVIDIA's graphics technology and is one of the founders of the CUDA technology. In 2002 he received the ACM SIGGRAPH Computer Graphics Achievement Award for his outstanding contributions to bringing high-performance computer graphics systems to the mass market. He holds a Ph.D. in computer science from the California Institute of Technology.
Wen-mei W. Hwu is the AMD Jerry Sanders Chair Professor of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, chief scientist of the Parallel Computing Institute, and leader of the IMPACT research group and the CUDA Center of Excellence. He has made distinguished contributions to compiler design, computer architecture, microarchitecture, and parallel computing; he is an IEEE Fellow and an ACM Fellow and has received numerous awards, including the ACM SIGARCH Maurice Wilkes Award. He is also a co-founder and CTO of MulticoreWare. He holds a Ph.D. in computer science from the University of California, Berkeley.
Publisher information:
No publisher information is available yet; we are still looking for it.
Excerpts from the book:
No excerpts are available yet; we are still looking for them.
Online reading / audiobook / purchase / PDF download links:
Read online: Programming Massively Parallel Processors (English Edition, 3rd Edition) online reading
Listen online: Programming Massively Parallel Processors (English Edition, 3rd Edition) online audiobook
Buy online: Programming Massively Parallel Processors (English Edition, 3rd Edition) online purchase
Selected passages:
No selected passages are available yet; we are still looking for them.
Other information:
Site ratings
Book variety: 8
Completeness of book information: 6
Site update speed: 7
Ease of use: 9
Book clarity: 6
Book format compatibility: 3
Advertising: 4
Loading speed: 4
Security: 9
Stability: 5
Search functionality: 4
Download convenience: 4
Download review tags
- Good experience (401+)
- Personal library (572+)
- Five-star reviews (439+)
- Paid content (562+)
- Excellent experience (493+)
- Worth it (284+)
- Many mixed reviews (664+)
- Highly recommended (60+)
Download reviews
- User 宫***凡 (2025-01-05 10:50:57):
So-so; all I can say is that the paid resources are quite a bit better than the free ones.
- User 印***文 (2025-01-12 19:47:53):
I really like this style and layout.
- User 戈***玉 (2024-12-24 19:00:30):
Really great.
- User 融***华 (2025-01-13 03:06:24):
The download speed is okay.
- User 后***之 (2024-12-29 10:12:06):
Highly recommended! Both the download speed and the book content are beyond reproach. A really conscientious site!
- User 养***秋 (2024-12-23 22:17:50):
I'm new here, just digging through the archives.
- User 国***舒 (2025-01-08 11:23:21):
A middling review: pay a little and you can find it here; if it's not here, you may not find it anywhere else either.
- User 索***宸 (2025-01-14 13:10:44):
The book quality is very good, and there are plenty of resources.
- User 田***珊 (2025-01-16 15:15:18):
Decent, though some books cannot be found in search.
- User 訾***雰 (2025-01-18 18:43:03):
The download was very fast; I chose the epub format.
- User 晏***媛 (2024-12-27 07:59:28):
Very user-friendly!
- User 相***儿 (2025-01-01 15:48:56):
You can find everything you are looking for here!!!
- User 常***翠 (2025-01-12 13:01:50):
Hahahaha.
- User 游***钰 (2025-01-10 20:18:37):
You only realize how good it is once you have used it. Recommended!
- User 敖***菡 (2025-01-19 22:23:33):
A good site, very convenient.
Book ratings
Plot: 9
Characterization: 3
Thematic depth: 3
Writing style: 4
Use of language: 4
Fluency of prose: 4
Conveyance of ideas: 3
Depth of knowledge: 6
Breadth of knowledge: 6
Practicality: 6
Chapter organization: 7
Structure and layout: 3
Novelty and originality: 7
Emotional resonance: 5
Engagement: 4
Real-world relevance: 9
Immersion: 3
Factual accuracy: 6
Cultural contribution: 4