introducing vizscorer: a bot advisor to improve your ggplot plots One of the most frustrating issues I face in my professional life is the abundance of ineffective reports generated within my company.
2018
Mac version of the updateR function: the updateR package I personally really appreciate the installr package from Tal Galili, since it lets you install a great number of tools needed for working with R just by running a function.
2017
It has been nearly half a year since the last post about workflower came out; why did I stay silent for so long? I have three major updates to explain the silence: Good guys at Packt Publishing asked me to write one more book about R and data mining; I suppose this is because the first one was well received. I spent my spare time working on updateR to get it ready to go on CRAN.
2016
We all know R is the first choice for statistical analysis and data visualisation, but what about big data munging? The tidyverse (or should we say hadleyverse) has been doing a lot in this field; nevertheless, this kind of activity is often handled in some other coding language. Moreover, sometimes you get as input pieces of analysis performed in other languages or, what is worse, pieces of databases packed in a proprietary format (like .
This short post is exactly what it seems: a showcase of all the themes available within the ggplot2 package. I was making such a list for myself (you know that feeling… “how would it look with this theme? let’s try this one…”) and in the end I thought it could be useful for my readers. At least this post will save you the time of trying all the different themes just to get a sense of how they look.
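The kind of side-by-side trying described above can be sketched in a few lines; this is a minimal example of cycling the same plot through a few of ggplot2's built-in themes (the `mtcars` plot is illustrative, not from the post):

```r
# Minimal sketch: one plot rendered with several built-in ggplot2 themes
library(ggplot2)

p <- ggplot(mtcars, aes(x = wt, y = mpg)) + geom_point()

p + theme_grey()     # the default theme
p + theme_bw()       # dark-on-light, good for print
p + theme_minimal()  # no background annotations
p + theme_classic()  # axis lines, no gridlines
p + theme_void()     # completely empty canvas
```

Each `theme_*()` call returns a theme object, so it can simply be added to an existing plot with `+`.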
I am really enjoying the UEFA Euro 2016 football competition, not least because our national team has done pretty well so far. That’s why, after browsing the statistics section of the official EURO 2016 website for a while, I decided to do some analysis on the data they share (as at the 21st of June). Just to be clear from the beginning: we are not talking about anything too rigorous, just some interesting questions with related answers gathered mainly through data visualisation.
Ah, writing a blog post! This is a pleasure I was forgetting, as you can guess from the date of the last post: it was around January… You may be wondering: what have you done in all this time? Well, quite a lot indeed: I changed my job (I am now working @ Intesa Sanpaolo Banking Group on Basel III statistical models) and became a dad for the third time (and if you are guessing, it’s a boy!
2015
This is not actually a real post but rather a code snippet surrounded by text. Nevertheless I think it is quite a useful one: have you ever found yourself writing a function where a data frame is created, and wanting to name that data frame based on a custom argument passed to the function? For instance, the output of your function is a really nice data frame named in a really trivial way, like “result”.
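The post's actual snippet is not shown in this excerpt; a minimal sketch of the idea, using base R's `assign()` to create the data frame under a caller-chosen name (the function and the name `sales_2015` are illustrative):

```r
# Minimal sketch: name a data frame after a string passed as an argument,
# assigning it into the calling environment instead of a fixed "result"
make_named_df <- function(df_name) {
  result <- data.frame(x = 1:3, y = letters[1:3])
  assign(df_name, result, envir = parent.frame())
}

make_named_df("sales_2015")
sales_2015  # the data frame now exists under the custom name
```

`parent.frame()` makes the assignment land in the environment that called the function, which is what lets the caller pick the name.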
It was around midnight here in Italy: I shared the code on GitHub, published a post on G+, LinkedIn and Twitter and then went to bed. Over the next hours things kept growing by themselves, with pleasant results like the following: https://twitter.com/DoodlingData/status/635057258888605696 The R community found ramazon a really helpful package. And I actually think it is: Amazon AWS is nowadays one of the most common tools for hosting web applications and websites.
Because Afraus attracted a good deal of interest, last month I exceeded the shinyapps.io free plan limits. That made me move my Shiny app to an Amazon AWS instance. Well, it was not so straightforward: even if there are plenty of tutorials around the web, every one of them seems to miss a part: upgrading the R version, removing shiny-server examples… And even with all the info it is still quite a long, error-prone process.
The last Report to the Nations published by the ACFE stated that, on average, fraud accounts for nearly 5% of companies’ revenues. Projecting this number onto the whole world GDP, it turns out that the “fraud country” produces something like a GDP three times greater than the Canadian GDP.
As I am currently working on a fraud analytics web application based on Shiny (currently in beta; more later on this blog) I found myself asking: wouldn’t it be great to add live chat support for my web application’s visitors? It would indeed! [an ancient example of chatting - Camera degli Sposi, Andrea Mantegna, 1465-1474] But how to do it? Unfortunately, looking on Google didn’t give any useful result.
In the early 1900s Frank Benford observed that “1” was more frequent as the first digit in his own logarithms manual. More than one hundred years later, we can use this curious finding to look for fraud in populations of data. Just give the shiny app a try.
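The idea behind a Benford analysis like the one in the post can be sketched in base R: extract the first digit of each amount and compare the observed frequencies with the Benford probabilities P(d) = log10(1 + 1/d). The lognormal toy data below is illustrative, not from the post:

```r
# Minimal sketch: observed first-digit frequencies vs Benford's Law
first_digit <- function(x) floor(x / 10^floor(log10(x)))  # leading digit of x > 0

set.seed(1)
amounts <- rlnorm(10000, meanlog = 5, sdlog = 2)  # skewed, Benford-like toy data

observed <- tabulate(first_digit(amounts), nbins = 9) / length(amounts)
benford  <- log10(1 + 1 / (1:9))                  # P(d) = log10(1 + 1/d)

round(rbind(observed = observed, benford = benford), 3)
```

Records whose leading-digit distribution deviates strongly from the `benford` row are the ones worth a closer look.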
2014
The main reason why After all, I am still an internal auditor. Therefore I often face one of the typical internal auditor’s problems: understanding links between people and companies, in order to discover the existence of hidden communities that could expose the company to unknown risks. The solution: Linker In order to address this problem I am developing Linker, a lean shiny app that takes one-to-one links as input and gives a network map as output:
If you have a blog you may want to discover how your website is performing for given keywords on the Google search engine. As we all know, this is not a trivial topic. The problem is that the manual solution would be quite time-consuming, requiring you to search for your website for every single keyword, across many, many pages. Feeling this way? [“Pain and fear, pain and fear for me” - Oliver Twist]
I reproduce below the principles from the amazing paper Best Practices for Scientific Computing, published in 2012 by a group of US and UK professors. The main purpose of the paper is to “teach” good programming habits, shared by professional developers, to people who weren’t born developers and became developers just for professional purposes. Scientists spend an increasing amount of time building and using software. However, most scientists are never taught how to do this efficiently. Best Practices for Scientific Computing Write programs for people, not computers.
As part of the **Excel functions in R** series, I have developed this custom function, reproducing the Excel right() function in the R language. Feel free to copy and use it.

[code language=“r”]
right <- function(string, char) {
  substr(string, nchar(string) - (char - 1), nchar(string))
}
[/code]

You can find other functions in the Excel functions in R post.
As part of the Excel functions in R series, I have developed this custom function, emulating the Excel left() function in the R language. Feel free to copy and use it.

left <- function(string, char) {
  substr(string, 1, char)
}

You can find other functions in the Excel functions in R post.
I started my “data journey” from Excel, getting excited by formulas like VLOOKUP(), RIGHT() and LEFT(). Then datasets got bigger, and I discovered that little spreadsheets were not enough, and looked for something bigger and stronger, eventually coming to R. But as you know, one never forgets the first love. So, for fun and for practice, I have written down some of the Excel functions in R. I hope you will enjoy them.
Great tutorial on text mining with Twitter by Paeng Angnakoon [youtube=http://youtu.be/mJVcANlkxU8]
Following the post about the %in% operator, I received this tweet: https://twitter.com/benwhite21/status/510520550553165824 I had a look at the code kindly provided by Ben and then asked myself: I know dplyr is a really nice package, but which snippet is faster? To answer the question I put the two snippets into two functions:

# Ben's snippet
dplyr_snippet <- function(object, column, vector) {
  filter(object, object[, column] %in% vector)
}

# AC's snippet
Rbase_snippet <- function(object, column, vector) {
  object[object[, column] %in% vector, ]
}

Then, thanks to the great microbenchmark package, I compared the two functions, testing the execution time of both, for 100.
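A runnable sketch of that comparison, assuming the two snippet functions above and a toy data frame (the `customer`/`amount` columns are illustrative, not from the post):

```r
# Minimal sketch: timing the dplyr and base R subsetting snippets
library(dplyr)
library(microbenchmark)

dplyr_snippet <- function(object, column, vector) {
  filter(object, object[, column] %in% vector)
}

Rbase_snippet <- function(object, column, vector) {
  object[object[, column] %in% vector, ]
}

set.seed(42)
df <- data.frame(customer = sample(letters, 1e4, replace = TRUE),
                 amount   = runif(1e4))
selected <- c("a", "b", "c")

# run each expression 100 times and summarise the timings
microbenchmark(
  dplyr = dplyr_snippet(df, "customer", selected),
  base  = Rbase_snippet(df, "customer", selected),
  times = 100
)
```

`microbenchmark()` prints a summary (min, median, max per expression), which is what makes the head-to-head comparison easy to read.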
Problem: you have to subset a data frame using, as the criterion, an exact match against a vector’s contents. For instance: you have a dataset with some attributes, and you have a vector with some values of one of the attributes. You want to filter based on the values in the vector. Example: sales records, where each record is a deal. The vector is a list of selected customers you are interested in.
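The sales-records example can be sketched with `%in%` in one line of base R (the customer names and amounts below are illustrative):

```r
# Minimal sketch: keep only the deals whose customer appears in the vector
sales <- data.frame(
  customer = c("acme", "globex", "initech", "acme", "hooli"),
  amount   = c(120, 85, 40, 310, 95)
)

selected_customers <- c("acme", "initech")

filtered <- sales[sales$customer %in% selected_customers, ]
filtered  # only the acme and initech deals remain
```

`%in%` returns a logical vector with one TRUE/FALSE per row, which the `[` operator then uses to keep the matching rows.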
I have just saturated my whole PC: the 4 GB of RAM are full and so is the CPU (an i7-4770 @ 3.4 GHz). Parallel computation in R: what is my secret? The doParallel package for R on Mac. The package lets you run some very useful parallel computations, giving you the possibility to use the full potential of your CPU. As a matter of fact, by default R uses just one of the cores you have on your PC.
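A minimal sketch of the doParallel setup the excerpt describes: register a cluster over all detected cores and run a `foreach` loop with `%dopar%` (the loop body is a toy placeholder, not the post's actual workload):

```r
# Minimal sketch: use every core via doParallel + foreach
library(doParallel)
library(foreach)

cl <- makeCluster(detectCores())  # one worker per available core
registerDoParallel(cl)

# each iteration runs on a worker; .combine = c flattens results to a vector
results <- foreach(i = 1:8, .combine = c) %dopar% {
  sum(rnorm(1e5))  # placeholder for a heavy computation
}

stopCluster(cl)  # always release the workers when done
results
```

Without `registerDoParallel()`, the same `%dopar%` loop would fall back to sequential execution with a warning, which is why the registration step matters.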
I was recently asked to analyze some Board entertainment expenditures in order to acquire sufficient assurance about their nature and the people responsible for them. In response to that request I developed a little Shiny app with an interesting reactive bubble chart. The plot, made using the ggplot2 package, is composed of: a categorical x value, represented by the clusters identified in the expenditure population; a numerical y value, representing the total amount spent; points defined by the total amount of expenditure in the given cluster for each company subject.