Month: May 2017

An Introduction to Modern Econometrics Using Stata

This book is a wonderful complement to the Stata technical manuals. It provides a wealth of practical tips and sample applications that help the intermediate-level Stata user advance in making the most efficient use of Stata. It is thoughtfully organized along the lines of an econometrics textbook, allowing practitioners to find relevant and useful commands, procedures, and examples by topics that are familiar and immediate. It also includes a most helpful appendix for novice programmers that will expedite their development into proficient Stata programmers. This book is a must-have reference for any organization that needs to train practitioners of econometrics in the use of Stata. (Peter Boberg, CRA International)

Seamless R and C++ Integration with Rcpp

Rcpp is an R add-on package which facilitates extending R with C++ functions. It is used for everything from small, quickly constructed add-on functions, written to experiment with something new or to accelerate computing by replacing an R function with its C++ equivalent, to large-scale bindings for existing libraries and building blocks in entirely new research computing environments. While still relatively new as a project, Rcpp has already become widely deployed among users and developers in the R community. Rcpp is now the most popular language extension for the R system and is used by over 100 CRAN packages as well as ten BioConductor packages.

This book aims to provide a solid introduction to Rcpp.

This book is for R users who would like to extend R with C++ code. Some familiarity with R is certainly helpful; a number of other books can provide refreshers or specific introductions. C++ knowledge is also helpful, though not strictly required. An appendix provides a very brief introduction to C++ for those familiar only with the R language. The book should also be helpful to those coming to R with more of a C++ programming background. However, additional background reading may be required to obtain a firmer grounding in R itself. Chambers (2008) is a good introduction to the philosophy behind the R system and a helpful source for acquiring a deeper understanding. There may also be some readers who would like to see how Rcpp works internally. Covering that aspect, however, requires fairly substantial C++ content and is not what this book is trying to provide. The focus of this book is clearly on how to use Rcpp.
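As a taste of what the book teaches, here is a minimal sketch of the pattern Rcpp enables: a C++ function exported to R through the package's attributes interface. The file name times_two.cpp is arbitrary, and the book develops far richer examples.

    // times_two.cpp -- a minimal Rcpp example; the file name is arbitrary
    #include <Rcpp.h>

    // The attribute below tells Rcpp to generate the glue code that
    // makes this C++ function directly callable from R.
    // [[Rcpp::export]]
    Rcpp::NumericVector timesTwo(Rcpp::NumericVector x) {
        // Rcpp "sugar" provides vectorized operators mirroring R semantics.
        return x * 2;
    }

From an R session, Rcpp::sourceCpp("times_two.cpp") compiles the file and exposes the function, so timesTwo(1:5) returns 2 4 6 8 10.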

DATA MINING: Practical Machine Learning Tools and Techniques

Technology now allows us to capture and store vast quantities of data. Finding patterns, trends, and anomalies in these datasets, and summarizing them with simple quantitative models, is one of the grand challenges of the information age—turning data into information and turning information into knowledge.

This book presents this new discipline in a very accessible form: as a text both to train the next generation of practitioners and researchers and to inform lifelong learners like myself. Witten and Frank have a passion for simple and elegant solutions. They approach each topic with this mindset, grounding all concepts in concrete examples, and urging the reader to consider the simple techniques first, and then progress to the more sophisticated ones if the simple ones prove inadequate.

If you are interested in databases, and have not been following the machine learning field, this book is a great way to catch up on this exciting progress. If you have data that you want to analyze and understand, this book and the associated Weka toolkit are an excellent way to start.
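To make "consider the simple techniques first" concrete, one of the rudimentary methods the book covers is the one-attribute (1R-style) rule learner. Weka itself is written in Java; the sketch below is an illustrative C++ rendering of the idea, not code from the book or the toolkit.

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // Learn a 1R-style rule: for each observed value of a single nominal
    // attribute, predict the majority class seen with that value.
    std::map<std::string, std::string>
    learnOneRule(const std::vector<std::pair<std::string, std::string>>& examples) {
        // counts[attributeValue][classLabel] = frequency in the training data
        std::map<std::string, std::map<std::string, int>> counts;
        for (const auto& [value, label] : examples)
            ++counts[value][label];

        // For each attribute value, keep the most frequent class label.
        std::map<std::string, std::string> rule;
        for (const auto& [value, classCounts] : counts) {
            int best = -1;
            for (const auto& [label, n] : classCounts)
                if (n > best) { best = n; rule[value] = label; }
        }
        return rule;
    }

A baseline this simple earns its keep as a yardstick: if a sophisticated model cannot beat a one-attribute rule on your data, the added complexity is not paying for itself.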

The Data Warehouse ETL Toolkit

Practical Techniques for Extracting, Cleaning, Conforming, and Delivering Data.

The Extract-Transform-Load (ETL) system is the foundation of the data warehouse. A properly designed ETL system extracts data from the source systems, enforces data quality and consistency standards, conforms data so that separate sources can be used together, and finally delivers data in a presentation-ready format so that application developers can build applications and end users can make decisions. This book is organized around these four steps.

The ETL system makes or breaks the data warehouse. Although building the ETL system is a back-room activity that is not very visible to end users, it easily consumes 70 percent of the resources needed for implementation and maintenance of a typical data warehouse.

The ETL system adds significant value to data. It is far more than plumbing for getting data out of source systems and into the data warehouse. Specifically, the ETL system:

  • Removes mistakes and corrects missing data
  • Provides documented measures of confidence in data
  • Captures the flow of transactional data for safekeeping
  • Adjusts data from multiple sources to be used together
  • Structures data to be usable by end-user tools
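The first two points lend themselves to a small illustration. The sketch below uses hypothetical helper names and is not code from the book, which keeps its techniques tool-agnostic; it trims stray whitespace from a source field and substitutes a documented placeholder for a missing value while counting how often it does so.

    #include <algorithm>
    #include <cctype>
    #include <string>

    // Trim leading and trailing whitespace from a raw source field.
    std::string trim(const std::string& s) {
        auto notSpace = [](unsigned char c) { return !std::isspace(c); };
        auto first = std::find_if(s.begin(), s.end(), notSpace);
        auto last = std::find_if(s.rbegin(), s.rend(), notSpace).base();
        return (first < last) ? std::string(first, last) : std::string();
    }

    // Correct a missing field: substitute a documented default and record
    // the fact, so confidence in the data stays measurable.
    std::string cleanField(const std::string& raw, int& missingCount) {
        std::string value = trim(raw);
        if (value.empty()) {
            ++missingCount;   // audit trail: how many values were imputed
            return "UNKNOWN"; // conformed placeholder, not a silent blank
        }
        return value;
    }

Counting the substitutions, rather than silently patching them, is what turns a fix into a documented measure of confidence in the data.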

ETL is both a simple and a complicated subject. Almost everyone understands the basic mission of the ETL system: to get data out of the source and load it into the data warehouse. And most observers are increasingly appreciating the need to clean and transform data along the way. So much for the simple view. It is a fact of life that the next step in the design of the ETL system breaks into a thousand little subcases, depending on your own weird data sources, business rules, existing software, and unusual destination-reporting applications. The challenge for all of us is to tolerate the thousand little subcases but to keep perspective on the simple overall mission of the ETL system. Please judge this book by how well we meet this challenge!

The Data Warehouse ETL Toolkit is a practical guide for building successful ETL systems. This book is not a survey of all possible approaches! Rather, we build on a set of consistent techniques for delivery of dimensional data. Dimensional modeling has proven to be the most predictable and cost-effective approach to building data warehouses. At the same time, because the dimensional structures are the same across many data warehouses, we can count on reusing code modules and specific development logic.

This book is a roadmap for planning, designing, building, and running the back room of a data warehouse. We expand the traditional ETL steps of extract, transform, and load into the more actionable steps of extract, clean, conform, and deliver, although we resist the temptation to change ETL into ECCD!
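Read as a pipeline contract, extract, clean, conform, and deliver each hand a well-defined data set to the next stage. The skeleton below uses a hypothetical record shape and stub logic; it is sketched only to make the data flow concrete, not to stand in for the subsystems the book describes.

    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical row shape; real warehouses conform many more fields.
    struct Record { std::string customerKey; double amount; };

    // Extract: pull raw rows from a source system (stubbed here).
    std::vector<Record> extract() {
        return { {"  c1", 10.0}, {"c2", -1.0} };
    }

    // Clean: enforce data-quality rules, e.g. drop impossible amounts.
    std::vector<Record> clean(std::vector<Record> rows) {
        std::vector<Record> kept;
        for (const auto& r : rows)
            if (r.amount >= 0) kept.push_back(r);
        return kept;
    }

    // Conform: align keys so separate sources can be used together.
    std::vector<Record> conform(std::vector<Record> rows) {
        for (auto& r : rows)
            r.customerKey.erase(0, r.customerKey.find_first_not_of(' '));
        return rows;
    }

    // Deliver: hand presentation-ready rows to the front room.
    void deliver(const std::vector<Record>& rows) {
        for (const auto& r : rows)
            std::cout << r.customerKey << ' ' << r.amount << '\n';
    }

    int main() { deliver(conform(clean(extract()))); }

Each stage is, in a real warehouse, a subsystem in its own right; the point of the skeleton is only the ordering and the hand-offs.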

In this book, you’ll learn to:

  • Plan and design your ETL system
  • Choose the appropriate architecture from the many possible choices
  • Manage the implementation
  • Manage the day-to-day operations
  • Build the development/test/production suite of ETL processes
  • Understand the tradeoffs of various back-room data structures, including flat files, normalized schemas, XML schemas, and star join (dimensional) schemas
  • Analyze and extract source data
  • Build a comprehensive data-cleaning subsystem
  • Structure data into dimensional schemas for the most effective delivery to end users, business-intelligence tools, data-mining tools, OLAP cubes, and analytic applications
  • Deliver data effectively both to highly centralized and profoundly distributed data warehouses using the same techniques
  • Tune the overall ETL process for optimum performance