Creating Maps on the Fly For UIL Realignment

This morning the high school football season officially started with the release of the much-anticipated 2014-2016 UIL Football Alignments. In years past, release day has meant the UIL servers crashing under the traffic (they did go down briefly this morning, prior to the release), but this year the UIL was prepared and had a back-up plan to divert traffic off their site.

So at exactly 9:00 am, the Twitterverse was alive with the ramblings of everyone who cares about Texas high school football.

I downloaded the files and immediately started sorting the teams into an Excel spreadsheet I had prepared for the occasion. Once done, I placed the data into my main database online.

I had already been modifying the map code I created and showed in a previous post so it could handle the different divisions, sort teams by division and district, and color-code them. By recycling this code, I easily created three maps.

Here’s the one where I sort them by division (blue for Division 1, red for Division 2):
http://sixmanfootball.com/big_alignment_map.php

Here’s a look at it

Then here are the ones where I separate Divisions 1 and 2 and split up the districts:
http://sixmanfootball.com/alignment_map.php?did=1
http://sixmanfootball.com/alignment_map.php?did=2

Here’s an example shot of what they look like

Using the new Google Maps API, I had everything up and running in less than an hour; all that was left was a little formatting and fine-tuning. Converting the data to an XML file makes all of the debugging so much easier.
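
For anyone curious about the mechanics, here’s a minimal sketch of the marker-plotting part, assuming the Google Maps JavaScript API v3 is loaded on the page. The team data, the map-canvas div id, and the color logic below are simplified placeholders, not my actual code; the real version reads the teams out of the XML file.

// hypothetical team data; the real list comes from the XML file
var teams = [
  { name: 'Team A', lat: 31.0, lng: -100.0, division: 1 },
  { name: 'Team B', lat: 32.5, lng: -97.3, division: 2 }
]

function initialize() {
  // center the map roughly on Texas
  var map = new google.maps.Map(document.getElementById('map-canvas'), {
    center: new google.maps.LatLng(31.0, -99.0),
    zoom: 6
  })

  // drop one marker per team, color-coded by division
  teams.forEach(function (team) {
    new google.maps.Marker({
      position: new google.maps.LatLng(team.lat, team.lng),
      map: map,
      title: team.name,
      icon: team.division === 1
        ? 'http://maps.google.com/mapfiles/ms/icons/blue-dot.png'
        : 'http://maps.google.com/mapfiles/ms/icons/red-dot.png'
    })
  })
}

google.maps.event.addDomListener(window, 'load', initialize)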

Creating a Simple Command-Line Streaming Twitter Search Engine Using node.js

About two weeks ago I published an article on Texas fan sentiment analysis, based on over 50,000 tweets I collected the day of the Valero Alamo Bowl.

This was fairly straightforward, as I utilized the code my colleague Taylor Smith created and modified it for my purposes. My biggest changes came with how I analyzed the data.

The problem I had was that the process of obtaining the tweets tied up my R console. This was problematic because I could neither use R nor start looking at the data. Another problem was that I had to decide up front how long I wanted to run the search. I could kill the process, but if the game ran past the time I had set, I would have to rush to restart it.

I prefer using the command line for things like this. I don’t use the command line for much else, so I knew it would at least free up my other software. Last summer, before Twitter changed their OAuth requirements, you could log in very easily via the command line.

Obviously that has changed. With the latest OAuth, logging in directly from the command line is not really an option. You need a registered application to make this happen.

This fall, Elizabeth Winkler at Mass Relevance mentioned an open-source package called ‘t’ that runs on Ruby. I set it up, but I don’t know enough Ruby to get much further than simple tweeting and a basic search from the command line.

What I really wanted was a way to run command-line searches that could use the streaming API and write the results to a nice JSON file. I am fairly certain t does that, but like I said, my Ruby skills aren’t the best.

Talking this over with my brother-in-law at lunch on Sunday, he figured there had to be a good way to do this with node.js. I have a little experience in node.js and JavaScript is fairly forgiving, so I started looking into it.

I found several node.js modules, such as ‘twit’ and ‘twitter’. Both are open source and easily found on GitHub. I experimented with both and came up with a simple method for running a search using twit (basically modifying a bit of their sample code).

Getting Started
First, make sure you have node.js installed on your computer.
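
If you’re not sure whether it’s already there, running this at a terminal will print the installed version (or an error if it isn’t):

node -v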

Then go to https://github.com/ttezel/twit and follow the instructions on installing twit.
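
The instructions there boil down to a single npm command:

npm install twit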

Next, you need a Twitter account. I read an article this week where the author suggested registering a separate Twitter account just for searches and experiments, so you don’t get your personal account banned by accident. I didn’t do this.

Once you have an account, go to https://dev.twitter.com and sign in as a developer to register an application.

After logging in, open the drop-down menu at the top right (where your profile icon is) and select ‘My Applications’.

Create a new application. This is where you get your Consumer Key and Consumer Secret. (Make sure you save these for easy reference in a minute.)

Now that you have those, click the ‘Create My Access Token’ button at the bottom of the page. This is where you get the Access Token and Access Token Secret. (Again, save these for easy reference in a minute.)

I am going to assume you are at a terminal, with node and node_modules installed in your home directory. If so, you can change directories:

cd node_modules/twit/examples

(Honestly, you can put this anywhere, but I figured why not keep it with the other twit code?)

Create a file called mysearch.js (vi mysearch.js, then paste in the code below):


var Twit = require('twit')
var fs = require('fs')

var T = new Twit({
    consumer_key: 'put yours here'
  , consumer_secret: 'put yours here'
  , access_token: 'put yours here'
  , access_token_secret: 'put yours here'
})

// filter the public stream for a list of search terms
var myList = ['texas', 'longhorns']
var stream = T.stream('statuses/filter', { track: myList })

stream.on('tweet', function (tweet) {
  // flatten the tweet object into a single line of JSON
  var jsonTweet = JSON.stringify(tweet)

  // append the tweet to the output file, one tweet per line
  fs.appendFile('myStream.json', jsonTweet + '\n', function (err) {
    if (err) return console.log(err)
  })
})

Be sure to add your Consumer Key, Consumer Secret, Access Token, and Access Token Secret where the placeholders are.

That’s it. Now just run the file.

node mysearch

A nice file (myStream.json) will start to accumulate in the directory you ran the script from.
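
Each line of that file is one complete tweet object, so it’s easy to peek at what you’ve collected. Here’s a quick, hypothetical check.js sketch; created_at and text are standard fields on every tweet:

// check.js - print the timestamp and text of each collected tweet
var fs = require('fs')

var lines = fs.readFileSync('myStream.json', 'utf8').trim().split('\n')
lines.forEach(function (line) {
  var tweet = JSON.parse(line)
  console.log(tweet.created_at + '  ' + tweet.text)
})

Run it with node check while the stream is still going.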

You can modify your search by just changing the items in myList. The twit documentation also gives solid advice on how to modify the searches by location, language, etc.
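
For example, the twit README shows how to filter the stream by a latitude/longitude bounding box instead of keywords. This is adapted from their sample (the coordinates trace a box around San Francisco):

// filter the public stream by a lat/lng bounding box instead of keywords
var sanFrancisco = [ '-122.75', '36.8', '-121.75', '37.8' ]
var stream = T.stream('statuses/filter', { locations: sanFrancisco })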

When you want to kill it, just hit Ctrl-C; otherwise, this thing can run for days, as long as you leave it on.

What to do with that file is up to you. I will post next week on updated R code to get you some usable data. Taylor Smith has some good starter code and we are working on tightening that up a bit. (There are some kinks in the date/time portion.)

The eventual extension will be to set the searches up on a server and let them run continuously on their own. I am currently installing node.js on my server and will be able to run these as cron jobs.

Another extension is to evaluate the tweets in real time and push some sort of data to a browser. Node.js was created to let you run JavaScript on the back end very efficiently.