May 12, 2010

Twitter API, OAuth and decorators

In my current project I had to work with the Twitter API. Twitter uses OAuth for authentication, which is pretty dreary to deal with by hand. To avoid fiddling with it all the time, I've moved the authentication into a decorator, so a view now looks like this:

@twitter_api
def tweet_hello(request, api):
    api.update_status('hello')
# ...

The decorator checks whether an access token is available and, if needed, initiates authentication: the user is redirected to Twitter, grants permission, and is redirected back to the site, to the same place where they left off. If the token is already available, nothing special happens and the view runs as usual.

Conveniently, there's no need for any extra Twitter settings in the user profile.

tweepy is used as the API wrapper.
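The consumer key and secret come from registering the application on Twitter's developer site; a typical place to keep them is settings.py (the values below are placeholders, and the code that follows assumes they're imported into the module namespace):

# settings.py
# Obtained by registering the application with Twitter
CONSUMER_KEY = 'your-consumer-key'
CONSUMER_SECRET = 'your-consumer-secret'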

import tweepy
from functools import wraps
from django.shortcuts import redirect

def twitter_api(view):
    @wraps(view)
    def wrapped(request, *args, **kwargs):
        # absolute_url is a small helper that builds a full URL for a view
        # (see the notes and sketch below).
        callback_url = absolute_url(oauth_endpoint)
        auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET, callback_url)

        # An access token is already in the session: call the view with a
        # ready-to-use API object.
        if 'twitter_access_token' in request.session:
            key, secret = request.session['twitter_access_token']
            auth.set_access_token(key, secret)
            return view(request, api=tweepy.API(auth), *args, **kwargs)

        # No token yet: remember where the user left off, stash the request
        # token and send them to Twitter for authorization.
        request.session['twitter_action'] = request.path
        redirect_url = auth.get_authorization_url()
        request.session['twitter_request_token'] = (auth.request_token.key,
                                                    auth.request_token.secret)
        return redirect(redirect_url)

    return wrapped


def oauth_endpoint(request):
    # Twitter redirects here after the user grants access.
    callback_url = absolute_url(oauth_endpoint)
    auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET, callback_url)
    # Restore the request token saved before the redirect and exchange it,
    # together with the verifier from the query string, for an access token.
    key, secret = request.session.pop('twitter_request_token')
    auth.set_request_token(key, secret)
    verifier = request.GET.get('oauth_verifier')
    auth.get_access_token(verifier)
    request.session['twitter_access_token'] = (auth.access_token.key,
                                               auth.access_token.secret)
    # Send the user back to the view that triggered authentication.
    return redirect(request.session.pop('twitter_action'))
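For this to work, oauth_endpoint has to be wired into the URLconf at the address that absolute_url computes. A minimal sketch in the Django 1.x style current at the time (the pattern and module path are illustrative):

# urls.py
from django.conf.urls.defaults import patterns, url

urlpatterns = patterns('',
    url(r'^twitter/oauth/$', 'myapp.views.oauth_endpoint'),
)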
  • Of course, you need to wrap everything in try..except blocks and handle the errors accordingly.
  • absolute_url should return the full URL, including the http:// scheme (see the sketch after this list).
  • Besides request.path, you can also store the POST and GET data.
  • The path can also be passed as a parameter of callback_url.
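A minimal sketch of what absolute_url could look like, assuming the Django sites framework is enabled (the helper itself isn't shown in this post, so take it as an illustration):

from django.contrib.sites.models import Site
from django.core.urlresolvers import reverse  # django.urls in modern Django

def absolute_url(view):
    # Twitter rejects relative callback URLs, so prepend scheme and domain.
    return 'http://%s%s' % (Site.objects.get_current().domain, reverse(view))

As for preserving the GET data mentioned above: storing request.get_full_path() instead of request.path keeps the query string as well.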

