Scraping Tweets Directly from Twitter's Search Page – Part 2
Published January 11, 2015
In the previous post we covered the theory of how we can search and extract tweets from Twitter without having to use their API.
First, let’s have a quick recap of what we learned in the previous post. We have a URL that we can use to search Twitter with:
https://twitter.com/i/search/timeline
This includes the following parameters:
Key | Value |
---|---|
q | The URL-encoded query string |
f | The type of query (omit for top results, or realtime for all tweets) |
scroll_cursor | Paginates through results; if omitted, the first page is returned |
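For example, a realtime search for the phrase "hello world" (with the space URL encoded) would look something like `https://twitter.com/i/search/timeline?q=hello%20world&f=realtime`.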
We also know that Twitter returns the following JSON response:
```json
{
    "has_more_items": true,
    "items_html": "...",
    "scroll_cursor": "..."
}
```
The response contains a few other fields, but these three are the only ones we care about.
Finally, we know that we can extract the following information for each tweet:
Selector | Value |
---|---|
div.original-tweet[data-tweet-id] | The unique ID of the tweet |
div.original-tweet[data-screen-name] | The author's twitter handle |
div.original-tweet[data-name] | The name of the author |
div.original-tweet[data-user-id] | The user ID of the author |
span._timestamp[data-time] | Timestamp of the post |
span._timestamp[data-time-ms] | Timestamp of the post in ms |
p.tweet-text | Text of Tweet |
span.ProfileTweet-action--retweet > span.ProfileTweet-actionCount[data-tweet-stat-count] | Number of Retweets |
span.ProfileTweet-action--favorite > span.ProfileTweet-actionCount[data-tweet-stat-count] | Number of Favourites |
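To make those selectors a little more concrete, here is a minimal JSoup sketch (the html variable here is just a stand-in for the items_html fragment from the response) showing how one of them maps onto a line of extraction code; we'll write the full version later in this post:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

// Parse the HTML fragment and print the ID of each tweet in it
Document doc = Jsoup.parse(html);
for (Element tweetDiv : doc.select("div.original-tweet")) {
    System.out.println(tweetDiv.attr("data-tweet-id"));
}
```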
OK, recap done. Let's consider some pseudocode to get us started. As the example is going to be in Java, the pseudocode will take on a Java-like syntax.
```java
searchTwitter(String query, long rateDelay) {
    searchURL = constructURL(query, null)
    scrollCursor = null
    while ((response = executeSearch(searchURL)) != null
            && response.has_more_items
            && response.scroll_cursor != scrollCursor) {
        tweets = extractTweets(response.items_html)
        saveTweets(tweets)
        scrollCursor = response.scroll_cursor
        searchURL = constructURL(query, scrollCursor)
        sleep(rateDelay)
    }
}
```
First, we define a function called searchTwitter, taking the query as a string and a specified time to pause the thread between calls. We pass the query to a function that builds our search URL. Then, in a while loop, we execute the search, returning a TwitterResponse object that represents Twitter's JSON response. While the response is not null, it has more items, and we are not repeating the scroll cursor, we extract tweets from the items_html value, save them, and build our next search URL. Finally, we sleep the thread for rateDelay milliseconds, so we are not bombarding Twitter with a stupid number of requests that could be viewed as a very crap DDoS.
Now that we’ve got an idea of what algorithm we’re going to use, let’s start coding.
I'm going to use Gradle as the build system, as we are going to use some additional dependencies to make things easier. You can download and set it up on your machine if you want, but I've also added a Gradle wrapper (gradlew) to the repository, so you can run it without installing Gradle. All you'll need is to make sure that your JAVA_HOME environment variable is set up and pointing to wherever Java is located.
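If you've not used the wrapper before, it's just a case of running `./gradlew build` (or `gradlew.bat build` on Windows) from the project root to compile everything.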
Let's take a look at the Gradle file.
```groovy
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    // Versions here are simply the ones current at the time of writing
    compile 'org.apache.httpcomponents:httpclient:4.3.6'
    compile 'com.google.code.gson:gson:2.3.1'
    compile 'org.jsoup:jsoup:1.8.1'
    testCompile 'junit:junit:4.11'
}
```
As this is a Java project, we've applied the java plugin. This gives us the standard directory structure that we get with Gradle and Maven projects: src/main/java and src/test/java.
In addition, there are several dependencies I've included to help make the task a little easier. HttpClient provides utilities that make it easier to construct URIs, Gson is a useful JSON processing library that will allow us to convert the response from Twitter into a Java object, and JSoup is an HTML parsing library that we can use to extract what we need from the items_html value that Twitter returns to us. I've also included JUnit, although I won't go into unit testing in this example.
Let's start writing our code. Again, if you're not familiar with Gradle, the root for your packages should be in src/main/java. If the folders are not already there, you can generate them, although feel free to look at the example code if you're still unclear.
```java
package uk.co.tomkdickinson.twitter.search;

import java.util.Date;

// A simple bean holding the values from the table above
public class Tweet {

    private String id;
    private String screenName;
    private String name;
    private String userId;
    private String text;
    private Date createdAt;
    private int retweets;
    private int favourites;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    public String getScreenName() { return screenName; }
    public void setScreenName(String screenName) { this.screenName = screenName; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getUserId() { return userId; }
    public void setUserId(String userId) { this.userId = userId; }

    public String getText() { return text; }
    public void setText(String text) { this.text = text; }

    public Date getCreatedAt() { return createdAt; }
    public void setCreatedAt(Date createdAt) { this.createdAt = createdAt; }

    public int getRetweets() { return retweets; }
    public void setRetweets(int retweets) { this.retweets = retweets; }

    public int getFavourites() { return favourites; }
    public void setFavourites(int favourites) { this.favourites = favourites; }
}
```
```java
package uk.co.tomkdickinson.twitter.search;

import java.util.ArrayList;
import java.util.List;

public class TwitterResponse {

    // Field names match the keys in Twitter's JSON response, so Gson
    // can populate them directly
    private boolean has_more_items;
    private String items_html;
    private String scroll_cursor;

    public boolean isHasMoreItems() { return has_more_items; }
    public void setHasMoreItems(boolean hasMoreItems) { this.has_more_items = hasMoreItems; }

    public String getItemsHtml() { return items_html; }
    public void setItemsHtml(String itemsHtml) { this.items_html = itemsHtml; }

    public String getScrollCursor() { return scroll_cursor; }
    public void setScrollCursor(String scrollCursor) { this.scroll_cursor = scrollCursor; }

    public List<Tweet> getTweets() {
        // We'll fully implement this later on
        return new ArrayList<>();
    }
}
```
You'll notice the additional method getTweets() in TwitterResponse. For now, it just returns an empty ArrayList, but we will revisit it later.
In addition to these bean classes, we also want to consider the edge cases where someone searches for an empty or null string, or where the query contains characters not allowed in a URL. To handle this, we will also create a small Exception class called InvalidQueryException.
```java
package uk.co.tomkdickinson.twitter.search;

public class InvalidQueryException extends Exception {

    public InvalidQueryException(String query) {
        super("Invalid query: " + query);
    }
}
```
Next, we need to create a TwitterSearch class and its basic structure. An important consideration here is that we want the code to be reusable, so in the example I have made the class abstract, with an abstract method called saveTweets. The nice thing about this is that it decouples the saving logic from the extraction logic; in other words, it allows you to implement your own save solution without having to rewrite any of the TwitterSearch code. You might also note that I've specified that saveTweets returns a boolean. This allows anyone extending the class to provide their own exit condition, for example once a certain number of tweets has been extracted. By returning false, we can indicate in our code to stop extracting tweets from Twitter.
```java
package uk.co.tomkdickinson.twitter.search;

import java.net.URL;
import java.util.List;

public abstract class TwitterSearch {

    // Return false to tell the search loop to stop
    public abstract boolean saveTweets(List<Tweet> tweets);

    public void search(final String query, final long rateDelay) throws InvalidQueryException {
        // We'll implement this shortly
    }

    public static URL constructURL(final String query, final String scrollCursor) throws InvalidQueryException {
        return null;
    }

    public static TwitterResponse executeSearch(final URL url) {
        return null;
    }
}
```
Finally, let's also create a TwitterSearchImpl. This will contain a small implementation of TwitterSearch so we can test our code as we go along.
```java
package uk.co.tomkdickinson.twitter.search;

import java.util.List;

public class TwitterSearchImpl extends TwitterSearch {

    private int tweetCount = 0;

    @Override
    public boolean saveTweets(List<Tweet> tweets) {
        if (tweets != null) {
            for (Tweet tweet : tweets) {
                System.out.println(tweet.getCreatedAt() + ": " + tweet.getText());
                tweetCount++;
                // Stop searching once we've collected 500 tweets
                if (tweetCount >= 500) {
                    return false;
                }
            }
        }
        return true;
    }

    public static void main(String[] args) throws InvalidQueryException {
        // The query and delay are just example values
        TwitterSearch twitterSearch = new TwitterSearchImpl();
        twitterSearch.search("hello world", 2000);
    }
}
```
All this implementation does is print out each tweet's date and text, collecting up to a maximum of 500, at which point the program terminates.
Now we have the skeleton of our project set up, let's start implementing some of the functionality. Considering our pseudocode from earlier, let's start with the search method in TwitterSearch:
```java
public void search(final String query, final long rateDelay) throws InvalidQueryException {
    TwitterResponse response;
    String scrollCursor = null;
    URL url = constructURL(query, scrollCursor);
    boolean continueSearch = true;
    while (continueSearch && (response = executeSearch(url)) != null) {
        if (response.isHasMoreItems() && !response.getScrollCursor().equals(scrollCursor)) {
            List<Tweet> tweets = response.getTweets();
            continueSearch = saveTweets(tweets);
            scrollCursor = response.getScrollCursor();
            url = constructURL(query, scrollCursor);
        } else {
            // No more results, or Twitter has handed back the same cursor again
            break;
        }
        try {
            Thread.sleep(rateDelay);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
```
As you can probably tell, that is pretty much our main pseudocode implemented. Running it will have no effect yet, as we haven't implemented any of the actual steps, but it is a good start.
Let's implement some of our other methods, starting with constructURL.
```java
public final static String TYPE_PARAM = "f";
public final static String QUERY_PARAM = "q";
public final static String SCROLL_CURSOR_PARAM = "scroll_cursor";
public final static String TWITTER_SEARCH_URL = "https://twitter.com/i/search/timeline";

public static URL constructURL(final String query, final String scrollCursor) throws InvalidQueryException {
    if (query == null || query.isEmpty()) {
        throw new InvalidQueryException(query);
    }
    try {
        URIBuilder uriBuilder = new URIBuilder(TWITTER_SEARCH_URL);
        uriBuilder.addParameter(QUERY_PARAM, query);
        uriBuilder.addParameter(TYPE_PARAM, "realtime");
        if (scrollCursor != null) {
            uriBuilder.addParameter(SCROLL_CURSOR_PARAM, scrollCursor);
        }
        return uriBuilder.build().toURL();
    } catch (MalformedURLException | URISyntaxException e) {
        // Both are caused by an invalid query string, so rethrow
        throw new InvalidQueryException(query);
    }
}
```
First, we check whether the query is valid. If not, we throw the InvalidQueryException from earlier. Additionally, we may hit a MalformedURLException or URISyntaxException, both caused by an invalid query string, so when caught we rethrow them as a new InvalidQueryException. Next, using a URIBuilder, we build our URL from some constants we've defined, along with the query and scroll_cursor values we pass in. With our initial query the scroll cursor will be null, so we also check for that. Finally, we build the URI and return it as a URL, so we can use it to open up an InputStream later on.
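For instance, calling `constructURL("hello world", null)` should hand us back a URL along the lines of `https://twitter.com/i/search/timeline?q=hello+world&f=realtime`, with URIBuilder taking care of the parameter encoding for us.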
Let's implement our executeSearch function. This is where we actually call Twitter and parse its response.
```java
public static TwitterResponse executeSearch(final URL url) {
    try {
        URLConnection connection = url.openConnection();
        Gson gson = new Gson();
        try (InputStreamReader reader = new InputStreamReader(connection.getInputStream())) {
            return gson.fromJson(reader, TwitterResponse.class);
        }
    } catch (IOException e) {
        // If Twitter stops returning a valid response, end the search
        e.printStackTrace();
        return null;
    }
}
```
This is a fairly simple method. All we're doing is opening up a URLConnection for our Twitter query, then parsing the response using Gson, deserializing the JSON into a TwitterResponse object that we can use. As we've already implemented the logic for using the scroll cursor, if we were to run this now, rather than the program terminating after a few seconds, it would keep running until there is no longer a valid response from Twitter. However, we haven't quite finished yet, as we have yet to extract any information from the tweets.
The TwitterResponse object is currently holding all the Twitter data in its items_html variable, so what we now need to do is go back to TwitterResponse and add some code that lets us extract that data. If you remember from earlier, we added a getTweets() method to TwitterResponse, but it currently returns an empty list. We're going to fully implement that method so that, when called, it builds up a list of tweets from the response's items_html.
To do this, we are going to be using JSoup, and we can even refer to some of those CSS queries that we noted earlier.
```java
public List<Tweet> getTweets() {
    final List<Tweet> tweets = new ArrayList<>();
    final Document doc = Jsoup.parse(items_html);
    for (Element li : doc.select("li")) {
        Element tweetDiv = li.select("div.original-tweet").first();
        if (tweetDiv == null) {
            continue;
        }
        // The tweet ID is the only value we insist on; without it we
        // can't do anything useful with the tweet later
        String id = tweetDiv.attr("data-tweet-id");
        if (id.isEmpty()) {
            continue;
        }
        Tweet tweet = new Tweet();
        tweet.setId(id);
        tweet.setScreenName(tweetDiv.attr("data-screen-name"));
        tweet.setName(tweetDiv.attr("data-name"));
        tweet.setUserId(tweetDiv.attr("data-user-id"));
        tweet.setText(li.select("p.tweet-text").text());
        try {
            tweet.setCreatedAt(new Date(Long.parseLong(
                    li.select("span._timestamp").attr("data-time-ms"))));
        } catch (NumberFormatException e) {
            // No timestamp on this item; leave createdAt as null
        }
        try {
            tweet.setRetweets(Integer.parseInt(li.select(
                    "span.ProfileTweet-action--retweet > span.ProfileTweet-actionCount")
                    .attr("data-tweet-stat-count")));
        } catch (NumberFormatException e) {
            tweet.setRetweets(0);
        }
        try {
            tweet.setFavourites(Integer.parseInt(li.select(
                    "span.ProfileTweet-action--favorite > span.ProfileTweet-actionCount")
                    .attr("data-tweet-stat-count")));
        } catch (NumberFormatException e) {
            tweet.setFavourites(0);
        }
        tweets.add(tweet);
    }
    return tweets;
}
```
Let's discuss what we're doing here. First, we create a JSoup document from the items_html variable. This allows us to select elements within the document using CSS selectors. Next, we go through each of the li elements that represent a tweet, extracting all the information that we are interested in. As you can see, there are a number of catch statements in here, as we want to guard against edge cases where particular data items might not be present (e.g. a user's real name), while at the same time not using an all-encompassing catch statement that would skip a tweet just because it is missing a single piece of information. The only value that we require in order to save the tweet is the tweet ID, as this allows us to fully extract information about the tweet later on if we so want. Obviously, you can modify this section to your heart's content to meet your own rules.
Finally, let's rerun our program one last time. You should now see tweets being extracted and printed out. That's it. Job done, finished!
Obviously, there are many ways this code can be improved. For example, a more generic error-checking methodology could be implemented to check against missing attributes (or you could just use Groovy and its safe-navigation operator ?.). You could implement Runnable in the TwitterSearch class to allow multiple calls to Twitter with a thread pool (although, I stress, respect rate limits). You could even change TwitterResponse so it parses the tweets into a list on creation, rather than extracting them from items_html each time you access them.
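To give a flavour of that second suggestion, here is a rough sketch of what a Runnable wrapper might look like; it's not part of the example code, and the class name and pool size are made up:

```java
package uk.co.tomkdickinson.twitter.search;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// A hypothetical Runnable wrapper, allowing several queries to share a
// thread pool. Keep rateDelay generous so you respect Twitter's limits.
public class TwitterSearchTask implements Runnable {

    private final String query;
    private final long rateDelay;

    public TwitterSearchTask(String query, long rateDelay) {
        this.query = query;
        this.rateDelay = rateDelay;
    }

    @Override
    public void run() {
        try {
            new TwitterSearchImpl().search(query, rateDelay);
        } catch (InvalidQueryException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(new TwitterSearchTask("hello", 5000));
        pool.submit(new TwitterSearchTask("world", 5000));
        pool.shutdown();
    }
}
```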