Edit (28 February 2017): Earlier today, it was announced that Mozilla has acquired Pocket, with plans to gradually release the Pocket source code under Open Source licenses. I expect Wallabag will remain the self-hosted solution, but given Pocket's improved integration with Firefox, I may move back to it in the near future.
I always have quite a bit on my to-read list: academic papers, blogs, planets, and the like. Usually, when I go through the planets, such as Fedora, GNOME, or the two neuroscience planets I follow (neuroscience and neuroscientists), I don't have time to read all the articles right then. I used to either bookmark links or note them down somewhere to read later. One day, though, I ran into Pocket, which lets you save articles to read later and makes them available to you on multiple devices. It's extremely convenient.
Of course, the one issue with Pocket is that it isn't Free software. So, as I usually do, I went looking for an alternative. After a few hours, I ran into Wallabag on GitHub. It's written in PHP and is licensed under the MIT license. It's quite easy to deploy, and there's a Gitter channel where you can get some help too.
You can either enter the URL in the Wallabag page manually, or use the Firefox/Chrome/Opera addon, which lets you right-click a page or a link and "Wallabag.it!". There's also a bookmarklet, which you can use with Pentadactyl, for example:
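For the curious, a read-later bookmarklet is just a `javascript:` URL that forwards the current page's address to your instance. Here's a small sketch of the idea; the `/bookmarklet` endpoint path and the instance URL below are illustrative assumptions, so check the bookmarklet your own Wallabag instance generates for the exact form:

```javascript
// Build a "save this page" URL for a Wallabag instance.
// NOTE: the '/bookmarklet' path is a hypothetical example, not a
// documented endpoint -- copy the real bookmarklet from your instance.
function wallabagSaveUrl(instance, pageUrl) {
  // Drop a trailing slash so we don't produce '//bookmarklet'.
  return instance.replace(/\/$/, '') +
    '/bookmarklet?url=' + encodeURIComponent(pageUrl);
}

// As an actual bookmarklet, the same idea collapses to one line:
// javascript:location.href='https://app.wallabag.it/bookmarklet?url='+encodeURIComponent(location.href)
```

The `encodeURIComponent` call matters: without it, query strings in the saved page's URL would be misread as parameters of the Wallabag request.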
Wallabag fetches the text of the page and stores it for you so that you can read it later. You can even organise your saved pages with tags and the like.
Here's a page that I'm trying to read later, for example:
I played with a deployment, but decided not to deploy and maintain an instance myself. Instead, I signed up for the hosted instance the Wallabag developers run here - Wallabag.it. It's quite cheap - they have an offer going at the moment too - only 9€ for an entire year!
Wallabag uses the FiveFilters Full-Text RSS tool to extract the text and other data from web pages. Some websites require special instructions to tell the tool what information needs to be extracted - this tends to happen with a few academic websites. There's a repository of such config files here. So, if you do run into a website that isn't rendering properly, you can troubleshoot the issue and submit a config file too.
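To give a feel for what such a config looks like, here's a hypothetical sketch in the FiveFilters site-config style. The directive names (`title`, `body`, `strip`, `test_url`) are part of that format, but the domain and the XPath expressions below are made up for illustration - a real file would use selectors matching the actual site's markup:

```
# Hypothetical site config for example.com (FiveFilters format).
# XPath pointing at the article title:
title: //h1[@class='article-title']
# XPath pointing at the element containing the article text:
body: //div[@id='article-body']
# Elements to remove from the extracted body:
strip: //div[@class='advert']
# A sample article URL used to test the rules:
test_url: http://example.com/some-article
```

The file is named after the site's domain, so the tool knows which rules to apply when it fetches a page from that host.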