Benford's Law: Theory, the General Law of Relative Quantities, and Forensic Fraud Detection Applications


Book Description

Contrary to the common intuition that all digits should occur randomly with equal chances in real data, empirical examinations consistently show that not all digits are created equal: low digits such as {1, 2, 3} occur much more frequently than high digits such as {7, 8, 9} in almost all data types, including those relating to geology, chemistry, astronomy, physics, and engineering, as well as accounting, financial, econometric, and demographic data sets. This intriguing digital phenomenon is known as Benford's Law.

This book gives a comprehensive and in-depth account of all the theoretical aspects, results, causes, and explanations of Benford's Law, with a strong emphasis on the connection to real-life data and the physical manifestation of the law. In addition to such a bird's-eye view of the digital phenomenon, the conceptual distinctions between digits, numbers, and quantities are explored, leading to the key finding that the phenomenon is actually quantitative in nature, originating from the fact that, in extreme generality, nature creates many small quantities but very few big quantities, corroborating the motto 'small is beautiful'. All of this is therefore applicable just as well to data written in the ancient Roman, Mayan, Egyptian, and other digit-less civilizations.

Fraudsters are typically not aware of this digital pattern and tend to invent numbers with approximately equal digital frequencies. The digital analyst can easily check reported data for compliance with this digital law, enabling the detection of tax evasion, Ponzi schemes, and other financial scams. The forensic fraud detection section of this book is written in a very concise and reader-friendly style; it gathers all known methods and standards in the accounting and auditing industry, summarizes and fuses them into a single coherent whole, and can be understood without deep knowledge of statistical theory or advanced mathematics. In addition, a digital algorithm is presented, enabling the auditor to detect fraud even when a sophisticated cheater is aware of the law and invents numbers accordingly. The algorithm employs a subtle inner digital pattern within the Benford pattern itself. This newly discovered pattern is deemed to be nearly universal, being even more prevalent than the Benford phenomenon, as it is found in all random data sets, Benford as well as non-Benford types.
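As a minimal illustration of the compliance check described above, the following Python sketch compares the observed first-digit frequencies of a data set against the Benford expectation log10(1 + 1/d) using a simple chi-square statistic; the ledger amounts and the 5% critical value are assumptions chosen purely for the example, not material from the book.

    import math
    from collections import Counter

    def leading_digit(x):
        """Return the first significant digit of a nonzero number."""
        s = f"{abs(x):.15e}"            # scientific notation, e.g. '1.205000000000000e+02'
        return int(s[0])

    def benford_expected():
        """Expected Benford frequencies for first digits 1..9: log10(1 + 1/d)."""
        return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

    def benford_chi_square(data):
        """Chi-square statistic comparing observed first-digit counts to Benford's Law."""
        data = [x for x in data if x != 0]
        counts = Counter(leading_digit(x) for x in data)
        n = len(data)
        return sum((counts.get(d, 0) - n * p) ** 2 / (n * p)
                   for d, p in benford_expected().items())

    # Hypothetical ledger amounts; a statistic well above the chi-square critical
    # value with 8 degrees of freedom (about 15.5 at the 5% level) suggests the
    # reported digits depart from Benford's Law and merit a closer audit.
    amounts = [120.5, 13.2, 1890.0, 27.4, 305.9, 44.1, 1.07, 260.0, 92.3, 158.6]
    print(benford_chi_square(amounts))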




Benford's Law


Book Description

Exploring the conceptual distinctions between digits, numbers, and quantities leads to the key finding that the phenomenon is actually quantitative in nature. Why? The author illustrates that, in extreme generality, nature creates many small quantities but very few big quantities, corroborating the motto "small is beautiful", and that all of this is therefore applicable just as well to data written in the ancient Roman, Mayan, Egyptian, and other digit-less civilizations. Fraudsters are typically not aware of this digital pattern and tend to invent numbers with approximately equal digital frequencies. The digital analyst can easily check reported data for compliance with this digital law, enabling the detection of tax evasion, Ponzi schemes, and other financial scams.




Benford's Law


Book Description

Benford's law states that the leading digits of many data sets are not uniformly distributed from one through nine, but rather exhibit a profound bias. This bias is evident in everything from electricity bills and street addresses to stock prices, population numbers, mortality rates, and the lengths of rivers. Here, Steven Miller brings together many of the world’s leading experts on Benford’s law to demonstrate the many useful techniques that arise from the law, show how truly multidisciplinary it is, and encourage collaboration. Beginning with the general theory, the contributors explain the prevalence of the bias, highlighting explanations for when systems should and should not follow Benford’s law and how quickly such behavior sets in. They go on to discuss important applications in disciplines ranging from accounting and economics to psychology and the natural sciences. The contributors describe how Benford’s law has been successfully used to expose fraud in elections, medical tests, tax filings, and financial reports. Additionally, numerous problems, background materials, and technical details are available online to help instructors create courses around the book. Emphasizing common challenges and techniques across the disciplines, this accessible book shows how Benford’s law can serve as a productive meeting ground for researchers and practitioners in diverse fields.




Benford's Law


Book Description

A powerful new tool for all forensic accountants, or anyone who analyzes data that may have been altered. Benford's Law gives the expected patterns of the digits in the numbers in tabulated data such as town and city populations or Madoff's fictitious portfolio returns. Those digits, in unaltered data, will not occur in equal proportions; there is a large bias towards the lower digits, so much so that nearly one-half of all numbers are expected to start with the digits 1 or 2. These patterns were originally discovered by physicist Frank Benford in the early 1930s, and have since been found to apply to all tabulated data. Mark J. Nigrini has been a pioneer in applying Benford's Law to auditing and forensic accounting, even before his groundbreaking 1999 Journal of Accountancy article introducing this useful tool to the accounting world. In Benford's Law, Nigrini shows the widespread applicability of Benford's Law and its practical uses to detect fraud, errors, and other anomalies. The book explores primary, associated, and advanced tests, all described with data sets that include corporate payments data and election data. It includes ten fraud detection studies, including vendor fraud, payroll fraud, due diligence when purchasing a business, and tax evasion. It covers financial statement fraud, with data from Enron, AIG, and companies that were the target of hedge fund short sales; looks at how to detect Ponzi schemes, including data on Madoff, Waxenberg, and more; and examines many other applications, from the Clinton tax returns and the charitable gifts of Lehman Brothers to tax evasion and number invention. Benford's Law has 250 figures and uses 50 interesting authentic and fraudulent real-world data sets to explain both theory and practice, and concludes with an agenda and directions for future research. The companion website adds additional information and resources.
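The claim that nearly one-half of all numbers are expected to start with the digit 1 or 2 follows directly from the first-digit formula; the short calculation below is standard Benford arithmetic rather than an excerpt from the book:

\[ P(d) = \log_{10}\left(1 + \frac{1}{d}\right), \qquad d = 1, \dots, 9, \]
\[ P(1) + P(2) = \log_{10} 2 + \log_{10}\frac{3}{2} = \log_{10} 3 \approx 0.477, \]

so roughly 47.7% of the entries in unaltered tabulated data are expected to begin with a 1 or a 2.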




An Introduction to Benford's Law


Book Description

This book provides the first comprehensive treatment of Benford's law, the surprising logarithmic distribution of significant digits discovered in the late nineteenth century. Establishing the mathematical and statistical principles that underpin this intriguing phenomenon, the text combines up-to-date theoretical results with overviews of the law’s colorful history, rapidly growing body of empirical evidence, and wide range of applications. An Introduction to Benford’s Law begins with basic facts about significant digits, Benford functions, sequences, and random variables, including tools from the theory of uniform distribution. After introducing the scale-, base-, and sum-invariance characterizations of the law, the book develops the significant-digit properties of both deterministic and stochastic processes, such as iterations of functions, powers of matrices, differential equations, and products, powers, and mixtures of random variables. Two concluding chapters survey the finitely additive theory and the flourishing applications of Benford’s law. Carefully selected diagrams, tables, and close to 150 examples illuminate the main concepts throughout. The text includes many open problems, in addition to dozens of new basic theorems and all the main references. A distinguishing feature is the emphasis on the surprising ubiquity and robustness of the significant-digit law. This text can serve as both a primary reference and a basis for seminars and courses.
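For readers meeting these characterizations for the first time, the significant-digit law and its scale invariance can be stated compactly; this is the standard formulation rather than a quotation from the text. Writing $S(X)$ for the significand of $X$,

\[ \operatorname{Prob}\bigl(S(X) \le t\bigr) = \log_{10} t, \qquad 1 \le t < 10, \]

which is equivalent to $\log_{10} X \bmod 1$ being uniformly distributed on $[0, 1)$. Scale invariance follows because, for any constant $c > 0$, $\log_{10}(cX) \bmod 1 = (\log_{10} c + \log_{10} X) \bmod 1$ is again uniform, so $cX$ obeys the same law.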




The Birth of Science


Book Description

This book reveals the multi-generational process involved in humanity's first major scientific achievement, namely the discovery of modern physics, and examines the personal lives of six of the intellectual giants involved. It explores the profound revolution in the way of thinking, and in particular the successful refutation of the school of thought inherited from the Greeks, which focused on the perfection and immutability of the celestial world. In addition, the emergence of the scientific method and the adoption of mathematics as the central tool in scientific endeavors are discussed. The book then explores the delicate thread between pure philosophy, grand unifying theories, and verifiable real-life scientific facts. Lastly, it turns to Kepler’s crucial 3rd law and shows how it was derived from a mere six data points, corresponding to the six planets known at the time. Written in a straightforward and accessible style, the book will inform and fascinate all aficionados of science, history, philosophy, and, in particular, astronomy.
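As a reminder of the regularity Kepler distilled from those six data points, his third law states that the square of a planet's orbital period is proportional to the cube of its mean distance from the Sun; the illustrative figures below are standard textbook values, not data from the book:

\[ \frac{T^2}{a^3} = \text{constant}. \]

For Earth, $T = 1$ yr and $a = 1$ AU give $T^2/a^3 = 1$; for Mars, $T \approx 1.881$ yr and $a \approx 1.524$ AU give $1.881^2 / 1.524^3 \approx 3.54 / 3.54 \approx 1$, the same constant.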




Structural Changes and their Econometric Modeling


Book Description

This book focuses on structural changes and economic modeling. It presents papers describing how to model structural changes, as well as papers introducing improvements to the existing before-structural-changes models, making it easier to later combine these models with techniques describing structural changes. The book also includes related theoretical developments and practical applications of the resulting techniques to economic problems. Most traditional mathematical models of economic processes describe how the corresponding quantities change with time. However, in addition to such relatively smooth numerical changes, economic phenomena often undergo more drastic structural change. Describing such structural changes is not easy, but it is vital if we want to have a more adequate description of economic phenomena – and thus, more accurate and more reliable predictions and a better understanding of how best to influence the economic situation.
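To make the notion of a structural change concrete, the sketch below runs an illustrative least-squares breakpoint search on synthetic data: it fits a separate linear trend before and after each candidate break date and keeps the split with the smallest combined squared error. This is a generic textbook device, not a method taken from the volume, and the simulated series is invented for the example.

    import numpy as np

    def fit_sse(t, y):
        """Sum of squared errors of an ordinary least-squares linear trend."""
        X = np.column_stack([np.ones_like(t), t])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    def find_break(t, y, min_seg=5):
        """Return (index, SSE) of the break minimizing the two-regime combined SSE."""
        best = None
        for k in range(min_seg, len(t) - min_seg):
            sse = fit_sse(t[:k], y[:k]) + fit_sse(t[k:], y[k:])
            if best is None or sse < best[1]:
                best = (k, sse)
        return best

    # Synthetic series: a slow upward trend that switches to a steeper one at t = 60.
    rng = np.random.default_rng(0)
    t = np.arange(100.0)
    y = np.where(t < 60, 0.5 * t, 0.5 * 60 + 2.0 * (t - 60)) + rng.normal(0, 1.0, 100)
    print(find_break(t, y))   # the estimated break index should land close to 60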




Financial Statement Fraud


Book Description

Valuable guidance for staying one step ahead of financial statement fraud. Financial statement fraud is one of the most costly types of fraud and can have a direct financial impact on businesses and individuals, as well as harm investor confidence in the markets. While publications exist on financial statement fraud and the roles and responsibilities within companies, there is a need for a practical guide on the different schemes that are used and detection guidance for these schemes. Financial Statement Fraud: Strategies for Detection and Investigation fills that need. It describes every major and emerging type of financial statement fraud, using real-life cases to illustrate the schemes; explains the underlying accounting principles, citing both U.S. GAAP and IFRS, that are violated when fraud is perpetrated; and provides numerous ratios, red flags, and other techniques useful in detecting financial statement fraud schemes. An accompanying website provides full-text copies of documents filed in connection with the cases that are cited as examples in the book, allowing the reader to explore the details of each case further. Straightforward and insightful, Financial Statement Fraud provides comprehensive coverage of the different ways financial statement fraud is perpetrated, including those that capitalize on the most recent accounting standards developments, such as fair value issues.
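As one example of the kind of ratio-based red flag such a guide relies on, the snippet below computes the days-sales-in-receivables index (DSRI), a commonly cited indicator in which receivables growing much faster than sales can signal premature or fictitious revenue recognition; the figures are invented for illustration and the interpretation threshold is a rule of thumb, not a standard quoted from the book.

    def dsri(receivables_t, sales_t, receivables_prev, sales_prev):
        """Days Sales in Receivables Index: (AR_t / Sales_t) / (AR_{t-1} / Sales_{t-1})."""
        return (receivables_t / sales_t) / (receivables_prev / sales_prev)

    # Hypothetical two-year figures (in thousands of dollars).
    index = dsri(receivables_t=900, sales_t=5_000, receivables_prev=600, sales_prev=4_800)
    print(f"DSRI = {index:.2f}")   # ~1.44; values well above 1.0 warrant a closer look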




Corruption and Fraud in Financial Markets


Book Description

Identifying malpractice and misconduct should be a top priority for financial risk managers today. Corruption and Fraud in Financial Markets identifies potential issues surrounding all types of fraud, misconduct, price/volume manipulation and other forms of malpractice. Chapters cover detection, prevention and regulation of corruption and fraud within different financial markets. Written by experts at the forefront of finance and risk management, this book details the many practices that bring potentially devastating consequences, including insider trading, bribery, false disclosure, frontrunning, options backdating, and improper execution or broker-agency relationships. Informed but corrupt traders manipulate prices in dark pools run by investment banks, using anonymous deals to move prices in their own favour, extracting value from ordinary investors time and time again. Strategies such as wash, ladder and spoofing trades are rife, even on regulated exchanges – and in unregulated cryptocurrency exchanges one can even see these manipulative quotes happening in real time in the limit order book. More generally, financial market misconduct and fraud affect about 15 percent of publicly listed companies each year, and the resulting fines can devastate an organisation's budget and initiate a tailspin from which it may never recover. This book gives you a deeper understanding of all these issues to help prevent you and your company from falling victim to unethical practices. It helps you learn about the different types of corruption and fraud and where they may be hiding in your organisation, identify improper relationships and conflicts of interest before they become a problem, understand the regulations surrounding market misconduct and how they affect your firm, and prevent budget-breaking fines and other potentially catastrophic consequences. Since the LIBOR scandal, many major banks have been fined billions of dollars for manipulation of prices, exchange rates and interest rates. Headline cases aside, misconduct and fraud are uncomfortably prevalent in a large number of financial firms; they can exist in a wide variety of forms, with practices in multiple departments, making self-governance complex. Corruption and Fraud in Financial Markets is a comprehensive guide to identifying and stopping potential problems before they reach the level of finable misconduct.
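As a deliberately naive illustration of how such manipulative quoting might be screened for, the sketch below flags large orders that are cancelled within a very short window without ever being filled; this is a toy heuristic with invented thresholds and data, not a regulatory definition or a technique endorsed by the book.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Order:
        order_id: str
        size: int
        placed_at: float                # seconds since the start of the session
        cancelled_at: Optional[float]   # None if the order was never cancelled
        filled_qty: int

    def flag_possible_spoofing(orders, max_lifetime=1.0, min_size=1_000):
        """Flag large orders cancelled quickly with zero fills (a crude spoofing screen)."""
        flagged = []
        for o in orders:
            if o.cancelled_at is None:
                continue
            lifetime = o.cancelled_at - o.placed_at
            if o.size >= min_size and o.filled_qty == 0 and lifetime <= max_lifetime:
                flagged.append(o.order_id)
        return flagged

    # Hypothetical order log.
    orders = [
        Order("A1", 5_000, 10.0, 10.4, 0),   # large, cancelled after 0.4 s, unfilled -> flagged
        Order("A2", 200, 11.0, 15.0, 200),   # small and fully filled -> ignored
        Order("A3", 8_000, 12.0, None, 0),   # resting order, never cancelled -> ignored
    ]
    print(flag_possible_spoofing(orders))    # ['A1']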




The Theory of Perfect Learning


Book Description

The perfect learning exists. By this we mean a learning model that can be generalized and, moreover, that can always fit the test data, as well as the training data, perfectly. In this thesis we have performed many experiments that validate this concept in many ways. The tools are given throughout the chapters that contain our developments. The classical Multilayer Feedforward model has been re-considered and a novel $N_k$-architecture is proposed to fit any multivariate regression task. This model can easily be extended to thousands of possible layers without loss of predictive power, and has the potential to overcome the usual difficulty of simultaneously building a model that fits the test data well and does not overfit. Its hyper-parameters (the learning rate, the batch size, the number of training epochs, the size of each layer, and the number of hidden layers) can all be chosen experimentally with cross-validation methods. There is a great advantage in building a more powerful model using the properties of mixture models: they can self-classify many high-dimensional data sets into a small number of mixture components. This is also the case for the Shallow Gibbs Network model, which we built as a Random Gibbs Network Forest to reach the performance of the Multilayer Feedforward Neural Network with fewer parameters and fewer backpropagation iterations. To make this happen, we propose a novel optimization framework for our Bayesian Shallow Network, called the Double Backpropagation Scheme (DBS), which can also fit the data perfectly with an appropriate learning rate, and which is convergent and universally applicable to any Bayesian neural network problem. The contribution of this model is broad. First, it integrates all the advantages of the Potts Model, a very rich random-partitions model, which we have also modified to propose its Complete Shrinkage version using agglomerative clustering techniques. The model also takes advantage of Gibbs Fields for the structure of its weight precision matrix, mainly through Markov Random Fields, and ultimately has five (5) variant structures: the Full-Gibbs, the Sparse-Gibbs, the Between-layer Sparse Gibbs (B-Sparse Gibbs for short), the Compound Symmetry Gibbs (CS-Gibbs for short), and the Sparse Compound Symmetry Gibbs (Sparse-CS-Gibbs) model. The Full-Gibbs mainly mirrors fully connected models, and the other structures show how the model can be reduced in complexity through sparsity and parsimony. All of these models have been tested experimentally, and the results arouse interest in these structures, in the sense that different structures lead to different results in terms of Mean Squared Error (MSE) and Relative Root Mean Squared Error (RRMSE). For the Shallow Gibbs Network model, we have found the perfect learning framework: it is the $(l_1, \boldsymbol{\zeta}, \epsilon_{dbs})$-\textbf{DBS} configuration, which is a combination of the \emph{Universal Approximation Theorem} and the DBS optimization, coupled with the (\emph{dist})-Nearest Neighbor-(h)-Taylor Series-Perfect Multivariate Interpolation (\emph{dist}-NN-(h)-TS-PMI) model [which in turn is a combination of the search for the nearest neighborhood for a good train-test association, the Taylor Approximation Theorem, and finally the Multivariate Interpolation Method].
It indicates that, with an appropriate number $l_1$ of neurons on the hidden layer, an optimal number $\zeta$ of DBS updates, an optimal DBS learning rate $\epsilon_{dbs}$, an optimal distance \emph{dist}$_{opt}$ in the search for the nearest neighbor in the training dataset for each test point $x_i^{\mbox{test}}$, and an optimal order $h_{opt}$ of the Taylor approximation for the Perfect Multivariate Interpolation (\emph{dist}-NN-(h)-TS-PMI) model once the {\bfseries DBS} has overfitted the training dataset, the training and test errors converge to zero (0). As the Potts Models and many random partitions are based on a similarity measure, we open the door to finding \emph{sufficient} invariant descriptors in any recognition problem for complex objects such as images, using \emph{metric} learning and invariance descriptor tools, to always reach 100\% accuracy. This is also possible with invariant networks that are also universal approximators. Our work closes the gap between theory and practice in artificial intelligence, in the sense that it confirms that it is possible to learn with very small allowed error.
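The abstract does not spell out the $N_k$-architecture itself, but the general recipe it advocates, a multilayer feedforward regressor whose learning rate, batch size, number of epochs, and layer sizes are chosen by cross-validation, can be sketched generically. The snippet below uses scikit-learn and a synthetic regression task purely as an illustration of that recipe under assumed hyper-parameter ranges; it is not a reproduction of the author's model.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import GridSearchCV

    # Synthetic multivariate regression task standing in for real data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    y = X[:, 0] - 2.0 * X[:, 3] + 0.5 * np.sin(X[:, 5]) + rng.normal(0, 0.1, 500)

    # Hyper-parameters chosen experimentally with cross-validation, as the abstract advocates.
    param_grid = {
        "hidden_layer_sizes": [(32,), (64,), (64, 64)],
        "learning_rate_init": [1e-3, 1e-2],
        "batch_size": [32, 64],
        "max_iter": [500],
    }
    search = GridSearchCV(MLPRegressor(random_state=0), param_grid,
                          cv=5, scoring="neg_mean_squared_error")
    search.fit(X, y)
    print(search.best_params_, -search.best_score_)   # selected configuration and its CV MSE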