Yingkou Institute of Technology Library Bibliographic Retrieval System


MARC Status: Cataloged    Document Type: Western-language book

Title/Responsibility:
Design of digital phase shifters for multipurpose communication systems : with MATLAB design and analysis programs / Binboga Siddik Yarman.
Edition Statement:
Second edition.
Publication/Distribution:
Gistrup, Denmark : River Publishers, 2022.
ISBN:
9788770223812
Physical Description:
652 pages ; 25 cm.
Series Statement:
River Publishers series in communications
Personal Author:
Yarman, Binboga Siddik, author.
Subject:
Mobile communication systems.
Subject:
Digital communications.
Subject:
Phase shifters -- Design and construction.
CLC (Chinese Library Classification) Number:
TN929.5
Bibliography Note:
Includes bibliographical references and index.
Contents Note:
Preface -- List of Figures -- List of Tables -- List of Abbreviations -- Appendix A. List of Algorithms -- Appendix D. List of Definitions -- Appendix E. List of Examples -- Appendix L. List of Lemmas and Theorems -- Appendix V. List of Video Links
Part I. The Art and Science of Clustering
1. Clusters: The Human Point of View (HPOV): 1.1 Introduction -- 1.2 What are Clusters? -- 1.3 Notes and Remarks -- 1.4 Exercises
2. Uncertainty: Fuzzy Sets and Models: 2.1 Introduction -- 2.2 Fuzzy Sets and Models -- 2.3 Fuzziness and Probability -- 2.4 Notes and Remarks -- 2.5 Exercises
3. Clusters: The Computer Point of View (CPOV): 3.1 Introduction -- 3.2 Label Vectors -- 3.3 Partition Matrices -- 3.4 How Many Clusters are Present in a Data Set? -- 3.5 CPOV Clusters: The Computer's Point of View -- 3.6 Notes and Remarks -- 3.7 Exercises
4. The Three Canonical Problems: 4.1 Introduction -- 4.2 Tendency Assessment (Are There Clusters?): 4.2.1 An Overview of Tendency Assessment -- 4.2.2 Minimal Spanning Trees (MSTs) -- 4.2.3 Visual Assessment of Clustering Tendency -- 4.2.4 The VAT and iVAT Reordering Algorithms -- 4.3 Clustering (Partitioning the Data into Clusters) -- 4.4 Cluster Validity (Which Clusters are "Best"?) -- 4.5 Notes and Remarks -- 4.6 Exercises
5. Feature Analysis: 5.1 Introduction -- 5.2 Feature Nomination -- 5.3 Feature Analysis -- 5.4 Feature Selection -- 5.5 Feature Extraction: 5.5.1 Principal Components Analysis -- 5.5.2 Random Projection -- 5.5.3 Sammon's Algorithm -- 5.5.4 Autoencoders -- 5.5.5 Relational Data -- 5.6 Normalization and Statistical Standardization -- 5.7 Notes and Remarks -- 5.8 Exercises
Part II. Four Basic Models and Algorithms
6. The c-Means (aka k-Means) Models: 6.1 Introduction -- 6.2 The Geometry of Partition Spaces -- 6.3 The HCM/FCM Models and Basic AO Algorithms -- 6.4 Cluster Accuracy for Labeled Data -- 6.5 Choosing Model Parameters (c, m, ‖·‖A): 6.5.1 How to Pick the Number of Clusters c -- 6.5.2 How to Pick the Weighting Exponent m -- 6.5.3 Choosing the Weight Matrix (A) for the Model Norm -- 6.6 Choosing Execution Parameters (V0, ε, ‖·‖err, T): 6.6.1 Choosing Termination and Iterate Limit Criteria -- 6.6.2 How to Pick an Initial V0 (or U0) -- 6.6.3 Acceleration Schemes for HCM (aka k-Means) and FCM -- 6.7 Cluster Validity With the Best c Method: 6.7.1 Scale Normalization -- 6.7.2 Statistical Standardization -- 6.7.3 Stochastic Correction for Chance -- 6.7.4 Best c Validation With Internal CVIs -- 6.7.5 Crisp Cluster Validity Indices -- 6.7.6 Soft Cluster Validity Indices -- 6.8 Alternate Forms of Hard c-Means (aka k-Means): 6.8.1 Bounds on k-Means in Randomly Projected Downspaces -- 6.8.2 Matrix Factorization for HCM for Clustering -- 6.8.3 SVD: A Global Bound for J1(U, V; X) -- 6.9 Notes and Remarks -- 6.10 Exercises
7. Probabilistic Clustering - GMD/EM: 7.1 Introduction -- 7.2 The Mixture Model -- 7.3 The Multivariate Normal Distribution -- 7.4 Gaussian Mixture Decomposition -- 7.5 The Basic EM Algorithm for GMD -- 7.6 Choosing Model and Execution Parameters for EM: 7.6.1 Estimating c With iVAT -- 7.6.2 Choosing Q0 or P0 in GMD -- 7.6.3 Implementation Parameters ε, ‖·‖err, T for GMD With EM -- 7.6.4 Acceleration Schemes for GMD With EM -- 7.7 Model Selection and Cluster Validity for GMD: 7.7.1 Two Interpretations of the Objective of GMD -- 7.7.2 Choosing the Number of Components Using GMD/EM With GOFIs -- 7.7.3 Choosing the Number of Clusters Using GMD/EM With CVIs -- 7.8 Notes and Remarks -- 7.9 Exercises
8. Relational Clustering - The SAHN Models: 8.1 Relations and Similarity Measures -- 8.2 The SAHN Model and Algorithms -- 8.3 Choosing Model Parameters for SAHN Clustering -- 8.4 Dendrogram Representation of SAHN Clusters -- 8.5 SL Implemented With Minimal Spanning Trees: 8.5.1 The Role of the MST in Single Linkage Clustering -- 8.5.2 SL Compared to a Fitch-Margoliash Dendrogram -- 8.5.3 Repairing SL Sensitivity to Inliers and Bridge Points -- 8.5.4 Acceleration of the Single Linkage Algorithm -- 8.6 Cluster Validity for Single Linkage -- 8.7 An Example Using All Four Basic Models -- 8.8 Notes and Remarks -- 8.9 Exercises
9. Properties of the Fantastic Four: External Cluster Validity: 9.1 Introduction -- 9.2 Computational Complexity: 9.2.1 Using Big-Oh to Measure the Growth of Functions -- 9.2.2 Time and Space Complexity for the Fantastic Four -- 9.3 Customizing the c-Means Models to Account for Cluster Shape: 9.3.1 Variable Norm Methods -- 9.3.2 Variable Prototype Methods -- 9.4 Traversing the Partition Landscape -- 9.5 External Cluster Validity With Labeled Data: 9.5.1 External Paired-Comparison Cluster Validity Indices -- 9.5.2 External Best Match (Best U, or Best E) Validation -- 9.5.3 The Fantastic Four Use Best E Evaluations on Labeled Data -- 9.6 Choosing an Internal CVI Using Internal/External (Best I/E) Correlation -- 9.7 Notes and Remarks -- 9.8 Problems
10. Alternating Optimization: 10.1 Introduction -- 10.2 General Considerations on Numerical Optimization: 10.2.1 Iterative Solution of Optimization Problems -- 10.2.2 Iterative Solution of Alternating Optimization with (t, s) Schemes -- 10.3 Local Convergence Theory for AO -- 10.4 Global Convergence Theory -- 10.5 Impact of the Theory for the c-Means Models -- 10.6 Convergence for GMD Using EM/AO -- 10.7 Notes and Remarks -- 10.8 Exercises
11. Clustering in Static Big Data: 11.1 The Jungle of Big Data: 11.1.1 An Overview of Big Data -- 11.1.2 Scalability vs. Acceleration -- 11.2 Methods for Clustering in Big Data -- 11.3 Sampling Functions: 11.3.1 Chunk Sampling -- 11.3.2 Random Sampling -- 11.3.3 Progressive Sampling -- 11.3.4 Maximin (MM) Sampling -- 11.3.5 Aggregation and Non-Iterative Extension of a Literal Partition to the Rest of the Data -- 11.4 A Sampler of Other Methods: Precursors to Streaming Data Analysis -- 11.5 Visualization of Big Static Data -- 11.6 Extending Single Linkage for Static Big Data -- 11.7 Notes and Remarks -- 11.8 Exercises
12. Structural Assessment in Streaming Data: 12.1 Streaming Data Analysis: 12.1.1 The Streaming Process -- 12.1.2 Computational Footprints -- 12.2 Streaming Clustering Algorithms: 12.2.1 Sequential Hard c-Means and Sebestyen's Method -- 12.2.2 Extensions of Sequential Hard c-Means: BIRCH, CluStream, and DenStream -- 12.2.3 Model-Based Algorithms -- 12.2.4 Projection and Grid-Based Methods -- 12.3 Reading the Footprints: Hindsight Evaluation: 12.3.1 When You Can See the Data and Footprints -- 12.3.2 When You Can't See the Data and Footprints -- 12.3.3 Change Point Detection -- 12.4 Dynamic Evaluation of Streaming Data Analysis: 12.4.1 Incremental Stream Monitoring Functions (ISMFs) -- 12.4.2 Visualization of Streaming Data -- 12.5 What's Next for Streaming Data Analysis? -- 12.6 Notes and Remarks -- 12.7 Exercises
References -- Index -- About the Author.
Summary Note:
The availability of packaged clustering programs means that anyone with data can easily do cluster analysis on it. But many users of this technology don't fully appreciate its many hidden dangers. In today's world of "grab and go" algorithms, part of my motivation for writing this book is to provide users with a set of cautionary tales about cluster analysis, for it is very much an art as well as a science, and it is easy to stumble if you don't understand its pitfalls. Indeed, it is easy to trip over them even if you do! The parenthetical word "usually" in the title is very important, because all clustering algorithms can and do fail from time to time. Modern cluster analysis has become so technically intricate that it is often hard for the beginner or the non-specialist to appreciate and understand its many hidden dangers. Here's how Yogi Berra put it, and he was right:

"In theory there's no difference between theory and practice. In practice, there is." (Yogi Berra)

This book is a step backwards, to four classical methods for clustering in small, static data sets that have all withstood the tests of time. The youngest of the four methods is now almost 50 years old:

- Gaussian Mixture Decomposition (GMD, 1898)
- SAHN Clustering (principally single linkage (SL, 1909))
- Hard c-means (HCM, 1956, also widely known as (aka) "k-means")
- Fuzzy c-means (FCM, 1973, reduces to HCM in a certain limit)

The dates are the first known writing (to me, anyway) about these four models. I am (with apologies to Marvel Comics) very comfortable in calling HCM, FCM, GMD and SL the Fantastic Four. Cluster analysis is a vast topic. The overall picture in clustering is quite overwhelming, so any attempt to swim at the deep end of the pool in even a very specialized subfield requires a lot of training. But we all start out at the shallow end (or at least that's where we should start!), and this book is aimed squarely at teaching toddlers not to be afraid of the water. There is no section of this book that, if explored in real depth, cannot be expanded into its own volume. So, if your needs are for an in-depth treatment of all the latest developments in any topic in this volume, the best I can do - what I will try to do anyway - is lead you to the pool, and show you where to jump in.
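Of the four methods named in the summary, hard c-means (HCM, aka k-means) is the simplest to state: alternate between assigning each point to its nearest prototype and recomputing each prototype as the mean of its assigned points. The sketch below is a minimal Python illustration of that iteration, not code from the book; the function name hard_c_means, the NumPy implementation, and all parameter defaults are assumptions made here purely for illustration.

```python
import numpy as np

def hard_c_means(X, c, max_iter=100, tol=1e-6, seed=0):
    """Minimal hard c-means (aka k-means) sketch: alternate between
    nearest-prototype assignment and mean-update until prototypes settle."""
    rng = np.random.default_rng(seed)
    # Initialize the c prototypes V0 by sampling c distinct data points.
    V = X[rng.choice(len(X), size=c, replace=False)].astype(float)
    for _ in range(max_iter):
        # Assignment step: label each point with its nearest prototype.
        dists = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each prototype to the mean of its points
        # (keep the old prototype if a cluster happens to be empty).
        V_new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                          else V[k] for k in range(c)])
        # Termination criterion: stop when prototypes stop moving.
        if np.linalg.norm(V_new - V) < tol:
            V = V_new
            break
        V = V_new
    return labels, V

# Tiny usage example on synthetic 2-D data with two well-separated clusters.
X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
labels, V = hard_c_means(X, c=2)
print("prototypes:\n", V)
```

Fuzzy c-means (FCM) softens the crisp argmin assignment into graded memberships controlled by a weighting exponent m, and recovers this hard algorithm in the limit as m approaches 1, which is the reduction the summary alludes to.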
Call Number | Barcode | Year/Vol. | Location | Status | Return Location
TN929.5/Y28 | 350000933 |  | 荟文堂 | Available | 荟文堂
