<h2 id="the-blog-of-jerel-unruh">The blog of Jerel Unruh</h2>
<h3 id="bulk-update-unique-values-with-ecto-and-postgresql">Bulk update unique values with Ecto and PostgreSQL</h3>
<p><em>2017-01-06 · <a href="https://jerel.co/blog/2017/01/bulk-update-unique-values-with-ecto-and-postgresql">https://jerel.co/blog/2017/01/bulk-update-unique-values-with-ecto-and-postgresql</a></em></p>
<p>I recently needed to retrieve the latest log for a specific device in a Phoenix app. Simple: use DISTINCT ON, right?</p>
<hr />
<p>Nope! DISTINCT ON should be used carefully: it works great on an empty database but performs horribly once the table holds a large number of records.</p>
<h4 id="the-problem">The Problem</h4>
<p>In my case I was saving streaming logs to a table that could be searched and analyzed later. However, I needed to frequently fetch the latest log for a set of devices. While bulk insert allows me to batch streaming logs together and then insert them many at a time, how can I query the table to get the last log for each device? My naive version looked like this:</p>
<div class="language-elixir highlighter-rouge"><pre class="highlight"><code><span class="no">Repo</span><span class="o">.</span><span class="n">all</span><span class="p">(</span><span class="n">from</span> <span class="n">l</span> <span class="ow">in</span> <span class="no">Logs</span><span class="p">,</span>
<span class="ss">where:</span> <span class="n">l</span><span class="o">.</span><span class="n">device_id</span> <span class="ow">in</span> <span class="o">^</span><span class="n">ids</span><span class="p">,</span>
<span class="ss">distinct:</span> <span class="n">l</span><span class="o">.</span><span class="n">device_id</span><span class="p">,</span>
<span class="ss">order_by:</span> <span class="p">[</span><span class="ss">desc:</span> <span class="n">l</span><span class="o">.</span><span class="n">updated_at</span><span class="p">])</span>
</code></pre>
</div>
<p>This is simple and easily returns the latest log for the specified devices. However, it performs horribly, especially on data such as logs that may run to millions of rows. Query times of 10+ seconds are possible with half a million rows.</p>
<p>In my case we didn’t have to have the <em>very</em> last log available to query so batching them together for a minute or two and doing a bulk update was a possibility. I ended up adding a foreign key to the device table itself that could be updated to point to a specific log entry.</p>
<p>Using <code class="highlighter-rouge">Repo.insert_all/3</code> with <code class="highlighter-rouge">on_conflict: :replace_all</code> isn’t possible because I only have the device ID at the time I’m inserting the log, and I definitely don’t want to query devices every time I insert logs. Neither could I use <code class="highlighter-rouge">Repo.update_all/2</code>, as it is meant to update many rows to the same value; what I need is to update many rows, each with its own value.</p>
<h4 id="the-solution">The Solution</h4>
<p>Luckily there’s a way to do this in Postgres, and Ecto allows us to run raw SQL queries for complex use cases such as this.</p>
<p>Assuming we have two tables that look like this:</p>
<div class="language-elixir highlighter-rouge"><pre class="highlight"><code><span class="k">defmodule</span> <span class="no">MyApp</span><span class="o">.</span><span class="no">Device</span> <span class="k">do</span>
<span class="kn">use</span> <span class="no">MyApp</span><span class="o">.</span><span class="no">Web</span><span class="p">,</span> <span class="ss">:model</span>
<span class="n">schema</span> <span class="sd">"</span><span class="s2">devices"</span> <span class="k">do</span>
<span class="n">field</span> <span class="ss">:name</span><span class="p">,</span> <span class="ss">:string</span>
<span class="n">belongs_to</span> <span class="ss">:last_log</span><span class="p">,</span> <span class="no">MyApp</span><span class="o">.</span><span class="no">Log</span>
<span class="k">end</span>
<span class="k">end</span>
<span class="k">defmodule</span> <span class="no">MyApp</span><span class="o">.</span><span class="no">Log</span> <span class="k">do</span>
<span class="kn">use</span> <span class="no">MyApp</span><span class="o">.</span><span class="no">Web</span><span class="p">,</span> <span class="ss">:model</span>
<span class="n">schema</span> <span class="sd">"</span><span class="s2">logs"</span> <span class="k">do</span>
<span class="n">field</span> <span class="ss">:data</span><span class="p">,</span> <span class="ss">:string</span>
<span class="n">field</span> <span class="ss">:time</span><span class="p">,</span> <span class="no">Timex</span><span class="o">.</span><span class="no">Ecto</span><span class="o">.</span><span class="no">DateTime</span>
<span class="k">end</span>
<span class="k">end</span>
</code></pre>
</div>
<p>We can insert the data as shown below. Note that since this is a raw query you’ll need to coerce <code class="highlighter-rouge">device_ids</code> and <code class="highlighter-rouge">log_ids</code> to whatever <code class="highlighter-rouge">Postgrex</code> expects: likely integers, but possibly <code class="highlighter-rouge">Ecto.UUID</code> if you use UUIDs for your keys.</p>
<div class="language-elixir highlighter-rouge"><pre class="highlight"><code><span class="c1"># not shown: use Repo.insert_all to save logs, then deduplicate them so we have the latest log for each device</span>
<span class="n">device_ids</span> <span class="o">=</span> <span class="p">[</span><span class="m">1</span><span class="p">,</span> <span class="m">2</span><span class="p">,</span> <span class="m">3</span><span class="p">]</span>
<span class="n">log_ids</span> <span class="o">=</span> <span class="p">[</span><span class="m">5000</span><span class="p">,</span> <span class="m">6001</span><span class="p">,</span> <span class="m">6003</span><span class="p">]</span>
<span class="n">sql</span> <span class="o">=</span> <span class="sd">"""
UPDATE devices
SET last_log_id = tmp.last_log_id
FROM
(SELECT unnest($1::integer[]) AS id, unnest($2::integer[]) AS last_log_id) AS tmp
WHERE devices.id = tmp.id
"""</span>
<span class="no">Ecto</span><span class="o">.</span><span class="no">Adapters</span><span class="o">.</span><span class="no">SQL</span><span class="o">.</span><span class="n">query</span><span class="p">(</span><span class="no">Repo</span><span class="p">,</span> <span class="n">sql</span><span class="p">,</span> <span class="p">[</span><span class="n">device_ids</span><span class="p">,</span> <span class="n">log_ids</span><span class="p">])</span>
</code></pre>
</div>
<p>Now reading the data is simple and super fast: whenever we fetch the device records we can select logs by their primary key. Storing the data is fast too: a few milliseconds to update a batch of devices to point to their latest log.</p>
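<p>For illustration, the read path might look like this. This is a sketch, not code from the original post: it assumes the <code class="highlighter-rouge">MyApp.Device</code> schema above with its <code class="highlighter-rouge">last_log</code> association, and an <code class="highlighter-rouge">ids</code> list of device IDs.</p>

```elixir
import Ecto.Query

# Fetch the devices, then preload each one's latest log. The preload
# is a single query keyed on last_log_id -- an indexed primary-key
# lookup, with no DISTINCT ON and no sort over millions of log rows.
devices =
  Repo.all(from d in MyApp.Device, where: d.id in ^ids)
  |> Repo.preload(:last_log)
```

A device that has never logged will simply have <code class="highlighter-rouge">last_log</code> set to <code class="highlighter-rouge">nil</code>, so callers should handle that case.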
<h3 id="connect-to-a-remote-elixir-node-deployed-with-distillery">Connect to a remote Elixir node deployed with Distillery</h3>
<p><em>2016-12-09 · <a href="https://jerel.co/blog/2016/12/connect-to-a-remote-elixir-node-deployed-with-distillery">https://jerel.co/blog/2016/12/connect-to-a-remote-elixir-node-deployed-with-distillery</a></em></p>
<p>When building Elixir apps with Distillery I don’t install Elixir on the server, as Distillery bundles everything it needs.</p>
<hr />
<p>I found that I always had to google and fiddle around a bit to connect Observer to a running app so decided to document it for myself and others.</p>
<h4 id="the-code">The Code</h4>
<p>Most existing tutorials and docs that I found assume that epmd is accessible on the remote host. This method needs nothing but standard Linux tools.</p>
<div class="language-bash highlighter-rouge"><pre class="highlight"><code><span class="gp">me@local:~$ </span>ssh user@example.com
<span class="gp">user@app1:~$ </span>netstat -ntlap | grep LISTEN
tcp 0 0 0.0.0.0:4369 0.0.0.0:<span class="k">*</span> LISTEN 3443/epmd
tcp 0 0 0.0.0.0:39566 0.0.0.0:<span class="k">*</span> LISTEN 3439/beam.smp
tcp 0 0 0.0.0.0:5432 0.0.0.0:<span class="k">*</span> LISTEN 3548/postgres
tcp 0 0 0.0.0.0:5000 0.0.0.0:<span class="k">*</span> LISTEN 3439/beam.smp
<span class="c"># on local machine use ports from top two rows above to create two tunnels (3rd row is postgres and 4th is web app)</span>
<span class="gp">me@local:~$ </span>ssh -L 4369:localhost:4369 -L 39566:localhost:39566 user@example.com
user@app1:~<span class="err">$</span>
<span class="c"># on local machine in a different terminal</span>
<span class="gp">me@local:~$ </span>iex --name debug@127.0.0.1 --cookie your-cookie <span class="c"># found as -setcookie in rel/<app>/var/vm.args</span>
<span class="gp">iex(debug@127.0.0.1)1> </span>Node.connect<span class="o">(</span>:<span class="s2">"my_app@127.0.0.1"</span><span class="o">)</span> <span class="c"># found as -name in rel/<app>/var/vm.args</span>
<span class="gp">iex(debug@127.0.0.1)1> </span>:observer.start
<span class="c"># in the Observer window that opens you can now select the remote node from the Nodes menu</span>
</code></pre>
</div>
<h4 id="results">Results</h4>
<p><img src="/assets/blog/observer.png" alt="elixir observer" /></p>
<p>If you have a better / more concise way of connecting let me know.</p>
<h3 id="why-im-excited-about-elixir-and-phoenix">Why I'm excited about Elixir and Phoenix</h3>
<p><em>2016-01-21 · <a href="https://jerel.co/blog/2016/01/why-im-excited-about-elixir-and-phoenix">https://jerel.co/blog/2016/01/why-im-excited-about-elixir-and-phoenix</a></em></p>
<p>For the last couple of years I've been writing Python for my server side applications. I really enjoy Django and Django REST Framework and how they enable me to build rapidly. However, for the last few months I've been studying the Elixir language and its main framework, <a href="http://www.phoenixframework.org/">Phoenix</a>. This post explains why I'm betting heavily on it for the future.</p>
<hr />
<p>To start off with I’ll explain how I need to use my frameworks and then I’ll follow up with an explanation of how Phoenix approaches those problems better.</p>
<h4 id="the-problem">The Problem</h4>
<p>I have a SaaS application that provides soft real time GPS tracking for public safety departments. The server receives GPS and other data streams from various hardware on ambulances or personnel and broadcasts them to the correct clients. The client applications stay open for weeks at a time and constantly receive data over websockets.</p>
<p>Now, using a traditional web framework such as Django presents a problem: how do you handle long lived, highly concurrent connections using a framework designed around the classic request/response? Initially developers added bidirectional communication to their applications by subscribing to a service such as Pusher. That’s reasonable for some apps but not for something that you are building an entire business on. You can add websockets in front of your HTTP framework using something like <a href="https://github.com/hendrix/hendrix">Hendrix</a> or Tornado but based on my experience getting good performance isn’t easy and most libraries that you are used to reaching for are blocking. You could write everything in Node and either just use one of your server’s CPUs or write userland code to leverage them all. Or you can split your codebase (microservices amiright?) letting your favorite framework handle API calls and Node handle your websockets.</p>
<p>In my architecture I went with the last option and it’s good but not great.</p>
<h4 id="let-me-introduce-elixir-and-otp">Let me introduce Elixir and OTP</h4>
<p>Elixir is a dynamic, functional language that compiles down to run on the Erlang VM. It was <a href="https://en.wikipedia.org/wiki/Elixir_(programming_language)">created by José Valim</a> and shares some similarity to Ruby, syntactically.</p>
<blockquote>
<p>Any sufficiently complicated microservices deployment contains an ad hoc, informally-specified, bug-ridden implementation of half of [OTP] – <a href="https://twitter.com/littleidea/status/532927711472549888">@littleidea</a></p>
</blockquote>
<p>Let me follow that up with another quote:</p>
<blockquote>
<p>If you watch the software industry backwards, it starts with kids flailing; ends with old guys solving impossible problems by thinking hard. – <a href="https://twitter.com/garybernhardt/status/152455259543961600">@garybernhardt</a></p>
</blockquote>
<p>It turns out that lots of the problems that the web is running into today have already been solved. Erlang was created by Ericsson back in ‘86 to run telephone networks (when’s the last time your phone was “down for routine maintenance”?) and was open sourced in ‘98. One of the greatest strengths of Erlang (and by extension Elixir) is concurrency. Back in the 80s they didn’t have phenomenal CPUs like <a href="http://accessories.us.dell.com/sna/productdetail.aspx?c=us&l=en&s=&cs=04&sku=319-2142&dgc=ST&cid=293344&lid=5616479&acd=12309152537461010&ven1=sE1inYhPj&ven2=,#Overview">the Xeon E7</a> but they did have lots of slower ones so Erlang and OTP (the set of libraries supporting deployment, distribution, etc) were designed to scale horizontally.</p>
<p>Consequently today it is trivial to spread the processing of an Elixir application across many CPUs and many machines. Anything that takes a significant amount of time (like a web request or a websocket) is done in its own super lightweight process (an Erlang VM process; not an operating system process). Then when that task is finished the process is garbage collected. If you need to do slow work like wait on an external API then just do it! It will only hold up your own process. A single server can run hundreds of thousands to millions of processes easily with the work being distributed across all available cores. If one client does something that creates an error only that process dies and restarts, not your entire app!</p>
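<p>To make that concrete, here is a small sketch (not from the original post): spawning tens of thousands of processes is routine, and the VM schedules them across every available core.</p>

```elixir
# Each Task.async/1 call starts its own lightweight BEAM process.
# 10,000 of them is no trouble; if any one crashes, only that
# process dies -- the rest keep running.
tasks = for i <- 1..10_000, do: Task.async(fn -> i * i end)
results = Enum.map(tasks, &Task.await/1)
```

Each task here does trivial arithmetic, but the same pattern holds when a process is parked waiting on a slow external API: it only holds up itself.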
<h4 id="and-now-lets-talk-about-phoenix">And now let’s talk about Phoenix</h4>
<p><a href="http://www.phoenixframework.org/">Phoenix</a> is the framework built to leverage and simplify all of this. One of the biggest features in my mind is Channels, which is a thin abstraction on top of websockets (it provides a keep-alive heartbeat, hooks like join, handle message, etc.). Channels aren’t tacked on as an afterthought; they are core to the framework and can be reasoned about much like HTTP requests. Phoenix Channels ship with websocket and longpolling transports (with client libraries to match) but you can completely swap them for a custom solution of your own such as a UDP transport.</p>
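<p>As a rough sketch of those hooks (the module, topic, and event names here are invented for illustration, not taken from my app), a channel module looks like this:</p>

```elixir
defmodule MyApp.TrackingChannel do
  use Phoenix.Channel

  # Invoked when a client joins a topic; return {:ok, socket} to accept.
  def join("tracking:" <> _unit_id, _params, socket) do
    {:ok, socket}
  end

  # Invoked for each incoming event on the topic; broadcast!/3
  # pushes the payload out to every subscribed client.
  def handle_in("position", payload, socket) do
    broadcast!(socket, "position", payload)
    {:noreply, socket}
  end
end
```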
<p>Phoenix also provides the things you would expect in a framework like code generators to help you get started quickly, migrations, security helpers like CSRF, form helpers, templating, etc.</p>
<p>A Phoenix application is just another OTP app. If you’re working on a large project dealing with all sorts of different protocols and one of them just happens to be the web you’re in luck. Or if you’re building a simple blog and don’t have a clue what OTP is you’re still in luck.</p>
<p>Phoenix uses the concept of a “connection”, usually referred to in code as <code class="highlighter-rouge">conn</code>. When an HTTP request comes into the app a <code class="highlighter-rouge">conn</code> data structure is created and is passed through the app being transformed by the framework functions first (request headers read, origin checked, request body parsed, etc) and then by your code (data inserted to the database, flash messages set, etc) until the response is created and the <code class="highlighter-rouge">conn</code> returned to the browser. <em>Channels are handled in much the same way</em> only instead of a stateless request/response they are a stateful conversation between client and server. With channels the <code class="highlighter-rouge">conn</code> exists for the duration of the websocket connection so you can store data on it such as a user ID or permissions.</p>
<p>Phoenix is functional. Every backend framework I’d used before was object oriented and I assumed functional programming, the GNU project, and Gentoo had a lot in common. It’s turning out to be quite user friendly and really nice… as mentioned in the example above you have data (a connection) that you perform transformations on until it reaches the state you want. In Phoenix these steps of transformations are called Plugs which are Elixir modules with functions <code class="highlighter-rouge">init</code> (compile time) and <code class="highlighter-rouge">call</code> (run time) defined. Plugs are not entirely unlike middleware in other frameworks but in Phoenix almost <em>everything</em> is a plug. CSRF protection in the framework? A plug. Body parsing in the framework? A plug. Authentication in your code? Write a plug. Permissions? Write a plug. Do you want different permissions in a couple controllers? Include a plug in those controllers. If you don’t like something that the framework does then swap out that plug. You starting to get the picture? :) And if at any point you wonder what the state of your app is you can <code class="highlighter-rouge">IO.inspect(conn)</code> and everything is there, as data.</p>
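<p>A minimal plug might look like the following. This is a sketch: the module name, header name, and hard-coded key are invented for illustration, but <code class="highlighter-rouge">init/1</code>, <code class="highlighter-rouge">call/2</code>, and the <code class="highlighter-rouge">Plug.Conn</code> functions are the real API.</p>

```elixir
defmodule MyApp.Plugs.RequireApiKey do
  import Plug.Conn

  # Compile time: options pass through unchanged.
  def init(opts), do: opts

  # Run time: either transform the conn and pass it along,
  # or send a response and halt the rest of the pipeline.
  def call(conn, _opts) do
    case get_req_header(conn, "x-api-key") do
      ["secret"] -> assign(conn, :authenticated, true)
      _ -> conn |> send_resp(401, "unauthorized") |> halt()
    end
  end
end
```

Dropping <code class="highlighter-rouge">plug MyApp.Plugs.RequireApiKey</code> into a router pipeline or controller is all it takes to apply it.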
<h4 id="in-closing">In closing</h4>
<p>Today’s internet is more than just a network of documents. It still does that too but it’s now also expected to do telephony, control medical equipment, stream movies, and talk to your thermostat. As we web developers assume more of that responsibility let’s remember to look away from the tools that we’ve used for years on CRUD apps and evaluate ways that may be better.</p>
<h3 id="an-introduction-to-emberjs-for-django-developers">An introduction to EmberJS for Django developers</h3>
<p><em>2015-07-10 · <a href="https://jerel.co/blog/2015/07/an-introduction-to-emberjs-for-django-developers">https://jerel.co/blog/2015/07/an-introduction-to-emberjs-for-django-developers</a></em></p>
<p>Back when I was in the automotive industry I noticed a stark difference between the engineering of domestic and import (especially German) vehicles. While they look much the same on the outside, under the hood they use a very different approach. On German cars hoses are connected with quick-connect latches and wire connectors have twist locks. American cars use worm or squeeze clamps on hoses and pinch locks on wires. The contrast between single page applications and server applications is much the same: to the consumer they are similar, but working on one requires us to use the opposite side of our brains. As is the case with cars, once you learn to think in the same manner as the original engineers everything just makes sense. This post is an attempt to explain some of the things I've learned the hard way over the last several years.</p>
<hr />
<p><img src="/assets/blog/ember.png" alt="ember.png" />
<img src="/assets/blog/django.png" alt="django.png" /></p>
<h4 id="were-not-building-pages">We’re not building “pages”</h4>
<p>The first concept to grasp is that we don’t serve a fully formed HTML page at every URL (except in the case of Universal JavaScript apps, but we’ll leave that topic alone for now). Instead we serve the same basic HTML page that includes our framework assets and our app’s JavaScript code. This page is often just a static file that is served by Nginx or Apache for every URL. The app then “boots” taking over the screen and rendering the HTML from minified templates and JSON data. It uses the URL segments to determine what to display.</p>
<h4 id="the-client-app-doesnt-care-about-your-server-framework">The client app doesn’t care about your server framework</h4>
<p>Don’t think of client side apps as an extension of your server side. Usually they are separate (I store mine in separate git repositories) and only communicate via JSON over an API. You can even serve your client app from a CDN. If your server side can parse and render JSON it will work with Ember, React, native applications, etc. If you do embed your client side app inside a server rendered page then you can share its session authentication but I have had good luck with authentication via tokens just as you would authenticate a mobile app.</p>
<h4 id="client-apps-are-long-lived">Client apps are long lived</h4>
<p>When you’re using jQuery plugins to decorate a server rendered page you can count on a page refresh once in a while to clear memory and reset state. I have one Ember app that runs on a wall mounted monitor in a dispatch office and is only refreshed for upgrades: memory leaks will not go unnoticed here! Use long-lived state to your advantage: if data is fetched when the app first loads it will stay in memory until released, so there’s no need to make an ajax request each time it is needed. This can also present challenges when API updates are made, as a client may keep using the old API for days.</p>
<h4 id="beware-of-hype">Beware of hype</h4>
<p>Don’t write a website as an isomorphic JavaScript application that talks to a Go API over websockets just because it’s the next big thing. I wrote this blog on Ember a couple years ago because I wanted to experiment with it. Would I write everything as a client side app now? No. But if it’s an application that a customer uses all day every day or you need to do something very complex then client side applications are amazing.</p>
<h4 id="the-client-side-is-complex">The client side is complex</h4>
<p>When you have a long lived client talking to a stateless API (and maybe a websocket) you are working with one of the harder problems in computer science: distributed computing. Give yourself time and treat it like any other development field. You won’t learn iOS development in a week and you won’t master Ember in a week either.</p>
<h4 id="terminology">Terminology</h4>
<p>Remember, we need to think with the other side of our brain here. Some of the same words are borrowed from server frameworks but they mean different things when in the context of a client application.</p>
<ul>
<li><code class="highlighter-rouge">router.js</code> - this file maps URLs to routes. It is the equivalent of the urls.py in a Django application</li>
<li><code class="highlighter-rouge">route</code> - somewhat equivalent to Django views or Django REST Framework resources. They have a <code class="highlighter-rouge">model</code> function that is called when the route is entered and that function needs to return data (usually fetched from the API via ajax). Routes handle navigating from one state to another (and keep you from breaking the Back button).</li>
<li><code class="highlighter-rouge">model</code> - the client side representation of your data (I use Ember Data, an excellent project). Models in my apps often somewhat match Django models but they aren’t 1 to 1. Model instances have a <code class="highlighter-rouge">save()</code> method on them and generally work a little like you’d expect a Django model to. Relationships can be fetched via <code class="highlighter-rouge">instance.get('relation')</code>.</li>
<li><code class="highlighter-rouge">component</code> - components have two pieces, <code class="highlighter-rouge">acme.js</code> and <code class="highlighter-rouge">acme.hbs</code>, and are most similar to Django template tags. The hbs file is in handlebars format and contains HTML that is compiled and shipped in the <code class="highlighter-rouge">client.js</code> file. The component’s js file handles the interactivity aspect, data manipulation related to display, and DOM events. The hbs file is optional as sometimes you may want a component to handle a single DOM element. Rather than setting up jQuery to listen to DOM events you will use <code class="highlighter-rouge">{{action "example"}}</code> in the hbs file to tell Ember to call the <code class="highlighter-rouge">example</code> action function in your component when the element is clicked, touched, etc.</li>
<li><code class="highlighter-rouge">adapter</code> - Ember Data uses an adapter to specify how to talk to an API. If you follow conventions by using a package like <a href="https://github.com/django-json-api/rest_framework_ember">Rest Framework Ember</a> you will rarely write an adapter.</li>
<li><code class="highlighter-rouge">serializer</code> - Ember Data also uses serializers to munge data from one format (a non-standard API) to the format expected by Ember Data. Once again if following conventions you will rarely write a serializer.</li>
<li><code class="highlighter-rouge">controller</code> - you may see controllers referenced in old documentation. Ember is moving away from them as they were poorly named. Controllers were simply long lived objects and can be replaced with components (if the use case is related to the DOM) or services (for the shared long lived object aspect).</li>
<li><code class="highlighter-rouge">view</code> - views are also a relic of the past. Views used to play the role that components now do except they were more “coarse” as they were typically paired 1:1 with routes.</li>
</ul>
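<p>To tie a few of these terms together, here is roughly what a <code class="highlighter-rouge">router.js</code> from an Ember CLI app of this era looks like. The route names are invented for illustration; the overall shape is the standard generated file.</p>

```javascript
// app/router.js -- the Ember equivalent of Django's urls.py
import Ember from 'ember';
import config from './config/environment';

const Router = Ember.Router.extend({
  location: config.locationType
});

Router.map(function() {
  this.route('devices', function() {
    // maps /devices/:device_id; the route's model() hook
    // receives device_id and returns the record to render
    this.route('device', { path: '/:device_id' });
  });
});

export default Router;
```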
<h4 id="development-and-building">Development and building</h4>
<p>The Ember community is standardized around the Ember CLI for development, package management, and asset building. This makes it very very simple to start new projects and introduce additional developers to a project. File structure, build pipeline, asset management, it’s all provided out of the box. Install it with <code class="highlighter-rouge">npm install -g ember-cli</code> and you’re ready to create an app with <code class="highlighter-rouge">ember new app-name</code>. You can even proxy all calls from the development domain to your Django install to avoid CORS errors by running <code class="highlighter-rouge">ember serve --proxy=http://127.0.0.1:8000</code>. Ember CLI provides live reload, file watching, and building. After running <code class="highlighter-rouge">ember build</code> you will find everything that you need to deploy in the <code class="highlighter-rouge">dist</code> folder.</p>
<h4 id="shared-effort">Shared effort</h4>
<p>Much as Python developers share packages on PyPI, Ember developers can combine efforts by sharing Ember add-ons. Browse <a href="http://emberobserver.com">Ember Observer</a> to see the many add-ons that you can install. Calendar widgets, loading indicators, animation, deployment: there’s a lot there.</p>
<p>This is a short blog post for such a huge topic, but hopefully it will help you in the right direction. Let me know if there are specific aspects you’d like me to write about in depth and I’ll attempt to cover them in future posts.</p>
<h3 id="korean-qnix-ips-monitor-and-apple-displays-working-on-ubuntu-linux">Korean QNIX IPS monitor and Apple displays working on Ubuntu Linux</h3>
<p><em>2013-10-28 · <a href="https://jerel.co/blog/2013/10/korean-qnix-ips-monitor-and-apple-displays-working-on-ubuntu-linux">https://jerel.co/blog/2013/10/korean-qnix-ips-monitor-and-apple-displays-working-on-ubuntu-linux</a></em></p>
<p>I recently bought a QNIX 2560x1440 27" Korean IPS monitor to add screen real estate to my XPS 13" ultrabook. Like most peripherals in Linux I expected it to just work when I plugged it in. Wrong. It did nothing at all.</p>
<hr />
<p>When I tried to configure it via Ubuntu’s Displays manager it just errored out.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>The selected configuration for displays could not be applied
could not assign CRTCs to outputs:
Trying modes for CRTC 63
CRTC 63: trying mode 1920x1080@60Hz with output at 1920x1080@60Hz (pass 0)
none of the selected modes were compatible with the possible modes:
Trying modes for CRTC 63
Trying modes for CRTC 64
Trying modes for CRTC 65
CRTC 63: trying mode 1920x1080@60Hz with output at 1920x1080@60Hz (pass 0)
none of the selected modes were compatible with the possible modes:
Trying modes for CRTC 63
Trying modes for CRTC 64
Trying modes for CRTC 65
CRTC 63: trying mode 1920x1080@40Hz with output at 1920x1080@60Hz (pass 0)
CRTC 63: trying mode 1680x1050@60Hz with output at 1920x1080@60Hz (pass 0)
CRTC 63: trying mode 1680x1050@60Hz with output at 1920x1080@60Hz (pass 0)
CRTC 63: trying mode 1600x1024@60Hz with output at 1920x1080@60Hz (pass 0)
CRTC 63: trying mode 1400x1050@60Hz with output at 1920x1080@60Hz (pass 0)
... and so on
</code></pre>
</div>
<p>So I started studying. I found a post with a modified EDID file but it didn’t work for me, so I kept searching. Eventually I came across <a href="http://ubuntuforums.org/showthread.php?t=1808585&page=2">a forum post about making Apple Cinema displays work on Ubuntu</a>. Since these Korean IPS displays are actually Apple panels inside, that got me to thinking that they are probably subject to the same trickery that Apple used in the Cinema panels and that this would just be a cleaner way to accomplish the same thing that the custom EDID did.</p>
<p>I used <code class="highlighter-rouge">cvt</code> to give me the correct modeline (my monitor is 2560x1440 and, based on that forum post, I used a 45 Hz refresh rate).</p>
<div class="highlighter-rouge"><pre class="highlight"><code>jerel@laptop:~$ cvt 2560 1440 45
# 2560x1440 44.94 Hz (CVT) hsync: 66.52 kHz; pclk: 227.75 MHz
Modeline "2560x1440_45.00" 227.75 2560 2720 2992 3424 1440 1443 1448 1480 -hsync +vsync
</code></pre>
</div>
<p>Then I plugged the monitor into my DisplayPort and applied the generated modeline with xrandr. (Don’t copy my xrandr lines below, as the modeline probably won’t be right for your monitor; use the value from your own cvt command.)</p>
<div class="highlighter-rouge"><pre class="highlight"><code>jerel@laptop:~$ xrandr --newmode "2560x1440_45.00" 227.75 2560 2720 2992 3424 1440 1443 1448 1480 -hsync +vsync
jerel@laptop:~$ xrandr --addmode DP1 "2560x1440_45.00"
</code></pre>
</div>
<p>And it came to life!</p>
<p><img src="/assets/blog/desk.jpg" alt="desk.jpg" /></p>
<p>However this doesn’t last through a reboot, so I created an <code class="highlighter-rouge">xorg.conf</code> file so X can handle the display itself. Ubuntu 13.04 no longer ships an <code class="highlighter-rouge">/etc/X11/xorg.conf</code> as it auto-detects everything, so you’ll most likely need to create the file.</p>
<p>Below is my minimal <code class="highlighter-rouge">xorg.conf</code>. I have the new Korean IPS plugged into a mini DisplayPort to active DVI adapter. If you want to see what your displays are named, run <code class="highlighter-rouge">xrandr -q</code>. Note: do not use my modeline values, as they will most likely not work; generate your own with cvt and your display’s resolution values as I did above.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>Section "Monitor"
Identifier "DP1"
Modeline "2560x1440_45.00" 227.75 2560 2720 2992 3424 1440 1443 1448 1480 -hsync +vsync
Option "PreferredMode" "2560x1440_45.00"
EndSection
</code></pre>
</div>
<p>Reboot, and the display should now be hotpluggable and can have its options (like turning off sticky edges and setting position) managed via the Displays window in Ubuntu.</p>
<p>I was concerned that I would come up with a solution that would apply the IPS settings anytime something was plugged into the DisplayPort and break projectors or other displays. However I’ve checked and this method allows my other displays to be detected as usual if I use one of them instead of the new IPS.</p>
<h3 id="30-day-challenge">30 day challenge</h3>
<p><em>2013-07-04 · <a href="https://jerel.co/blog/2013/07/30-day-challenge">https://jerel.co/blog/2013/07/30-day-challenge</a></em></p>
<p>I've watched Matt Cutts set and achieve his <a href="http://www.ted.com/talks/matt_cutts_try_something_new_for_30_days.html">30 day challenges</a> for a couple of years now and I've always thought that it would be a great thing to try myself.</p>
<hr />
<h4 id="the-plan">The Plan</h4>
<p>Today I’m going to bite the bullet and start my own 30 day challenge: <strong>intense exercise for 30 minutes or more each and every day</strong>. While I’ve thoroughly enjoyed mountain biking in the past, it’s always subject to my work schedule. Which is another way of saying “infrequent”. It’s happened before where I’ve woken up in the morning, sat down to write code, and haven’t gotten up until evening. We know that’s not a healthy practice but… there are so many problems that need to be solved and all I have to do to solve them is rattle my keyboard. It’s addictive.</p>
<p>I’ve been alive for well over 20 years now and I’ve seen a lot of months come and go. However I don’t think I’ve ever done one thing every single day for 30 days in a row. Sure I’ll work out 2 or 3 times a week but inevitably I miss a day or two. Then the next week it’s even easier to skip another day. I want to create a habit that will make me healthier and clear my mind.</p>
<h4 id="the-reward">The Reward</h4>
<p>My main reward is to feel better. I don’t feel bad now but if I’ve been biking regularly I feel on top of the world. I would like a road bike too… we have beautiful winding roads to explore near my house. If I can prove to myself that I will use it regularly I may finally buy a nice road bike.</p>
<h4 id="join-me">Join Me?</h4>
<p>Why not start your own 30 day challenge? The month will come and go whether you do something or not. <a href="http://twitter.com/jerelunruh">Mention me on Twitter</a> or leave a comment to let me know what you’re going to do.</p>
Software and seatbelts. Protecting the consumer2013-05-03T22:10:00+00:00https://jerel.co/blog/2013/05/software-and-seatbelts-protecting-the-consumer<p>This blog post is a result of a discussion I read on Twitter among a number of developers who were debating whether Wordpress should be criticized for the quality of its codebase. One point raised was that since Wordpress is hugely popular it must be good enough for the common user. A counter point stated that far more should have been done to keep the codebase secure and modern on behalf of the common user.</p>
<hr />
<p>It set me to thinking about our responsibility when releasing or selecting a product, so here is a blog post.</p>
<h4 id="seatbelts-and-airbags">Seatbelts and Airbags</h4>
<p>On January 1, 1968 the federal law requiring vehicles to ship with seatbelts went into effect. Obviously the nation had taken a look at the vehicular mortality rates and decided that seat belts would be a “best practice”. The federal law mandated that all vehicles must follow the best practice.</p>
<p>In the early 1990s the airbag law was passed mandating that by 1997 all cars must ship with airbags. This too had been concluded to be a good preventative measure for violent crashes.</p>
<h4 id="software-and-its-seatbelts">Software and its seatbelts</h4>
<p>My thought is that we as developers that release software or provide software as a service have a responsibility to follow our own industry’s best practices in order to <strong>protect less knowledgeable users</strong>. We don’t have a governing body to demand compliance (neither do we want one) so we must be guided by the desire to protect those who are trusting us.</p>
<p>The best practices that affect our industry include items such as:</p>
<ul>
<li>Password hashing with salts. Never store user passwords as plain text, md5, etc.</li>
<li>Don’t email users their passwords. If you do this you have probably also broken rule 1.</li>
<li>Don’t violate a user’s trust by abusing information they shared with you. (phone, email, or text spam for example)</li>
<li>Battle-tested input sanitization. If you roll your own as you code you will be vulnerable at some point.</li>
<li>Login limiting. With known usernames and no login limits a bot is free to brute force your user accounts.</li>
<li>Test your code. Prevent regressions in code you ship. Speed up development. Make it easier for other developers to work with your code.</li>
<li>Separation of concerns. Don’t place database logic in your HTML. Designers don’t deserve that; developers don’t deserve that.</li>
<li>Code reuse. Use packages, write packages and share them. The more that conventions are used and packages are shared the easier one developer can pick up where another left off.</li>
</ul>
<p>Some of the above points affect the end-users such as website visitors that sign up for an account. Others such as separation of concerns only affect the developers who will be working with the code. This is also important! Many business owners have built upon poorly organized software only to realize that they had nowhere to go once their business started becoming successful and they needed to scale or customize. Since they were already invested they may have ploughed ahead sinking much more labor into the project than necessary.</p>
<p>If you’re a software consumer, build on the correct platform to start with. If at all possible consult with someone knowledgeable to help you decide. If you didn’t and think you got it wrong, don’t be afraid to ask advice now and switch to a different solution. The less time you spend beating a dead horse the better.</p>
<h4 id="change-is-hard">Change is hard</h4>
<p>If you are leading a software project nothing worthwhile comes easy. The key is to push for the best, don’t stagnate. If you are an industry leader do your best to implement and invent best practices. Push your users/developers to learn with you. If you don’t you will end up with an entire community around you that is stale. Considering the speed of innovation on the web your solution from 10 years ago most likely needs to be rethought.</p>
<p>Changing may require a change in direction or a deep refactor. Only you can decide if this should be done to your own projects. However don’t be afraid of pushing your community to advance. The alternative is a slow and bitter death by becoming irrelevant.</p>
<h4 id="psychology">Psychology</h4>
<p>Perhaps our default is to have a legacy mindset toward software. Humans often want to use something because it is familiar, popular, or convenient, not because it’s the correct tool for the job. Wordpress is not always the answer, PHP is not always the answer, nor PyroCMS, nor Rails, nor Word, or even Windows or Mac. Look at a wide array of tools, try them out, ask people smarter than yourself.</p>
<p>Then when someone asks you to make “Wordpress into a corporate CMS” or to “recode PyroCMS in ASP.NET” you can explain why it should or should not be done instead of blindly hacking it to fit their vision. (those are actual customer requests, by the way)</p>
<h4 id="make-a-difference">Make a difference</h4>
<p>If you are a developer or designer and you see an open source project lacking then see if you can pitch in. In many cases it will be welcomed. Newish projects may very well not be implementing best practices due to a lack of manpower or money. Older projects may reject help because they aren’t willing to accept breaking changes. In the latter case it must wait on the leadership’s vision and desire to push their community.</p>
<p>If you are an integrator or freelancer that advises others, refuse to use software that doesn’t measure up. Kindly educate others on deficiencies that have been swept under the rug for years. By continuing to say “it’s not right but it’s good enough” we are furthering the adoption of bad practices.</p>
<p>Lastly nobody is perfect. Approach everybody with respect and the initial assumption that they are doing their best.</p>
Getting started with developing cross-framework Composer packages2012-09-22T22:10:00+00:00https://jerel.co/blog/2012/09/getting-started-with-developing-cross-framework-composer-packages<p>There's a bit of a renaissance going on in the PHP world right now, and it's a very good thing. Many of us developers are used to identifying ourselves as being a "CodeIgniter developer" or a "CakePHP developer", or maybe even a "Zend developer". The new PHP-FIG standards and a beautiful package management system named Composer may change all that.</p>
<hr />
<h4 id="how-and-why">How and why?</h4>
<p>If you are a CodeIgniter developer this will look very familiar to you:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>$this->load->library('upload');
$data = $this->upload->do_upload();
</code></pre>
</div>
<p>Now let’s say you use the CodeIgniter upload library and you like the features it has. But you don’t like anything else about CodeIgniter. Now what? You could convert the library to use camelCase instead of snake_case and figure out how to load it in your framework and you would end up with a port that would work (once you replaced all the little framework dependent bits inside the class).</p>
<p>But now Joe that uses the Acme framework sees it and he likes the upload library but nothing else about your framework. So he ports it to the Acme framework… Ultimately many hours of labor have been wasted with no real gain. From now on you each maintain your separate copies and add features that you need with little profit from each other’s labors.</p>
<h4 id="the-solution">The Solution</h4>
<p>Enter PSR-0, the interoperability standard. The PSR-0 convention guarantees that classes written for one project can be dropped into another project and their namespaces and classes will coexist happily with all your other PSR-0 code. Since the directory structure must match the namespacing it makes autoloading a breeze.</p>
<h4 id="the-tools">The Tools</h4>
<p>Now it’s time to talk about <a href="http://getcomposer.org">GetComposer.org</a> and <a href="http://packagist.org">Packagist.org</a>. Composer is a wonderful tool for installing and updating all of your PSR-0 packages independently. Packagist is Composer’s directory that tracks those packages so that developers (and Composer) have a single place to look for compatible code.</p>
<p>Now when you decide to write the world’s best upload package you write it to match the PSR-0 style, push it to Github, and tell Packagist.org about it. Joe from the Acme framework wants to use it so he installs it with Composer and everyone is happy. You keep improving the package but Joe doesn’t need to worry, he specified in his composer.json config file that his project needs Uploader v1.0.x so when he runs <code class="highlighter-rouge">composer update</code> he will get bug fixes but no breaking changes.</p>
<h4 id="testing">Testing</h4>
<p>There is another very real benefit to using Composer packages. Every package that you use should have its own test coverage. Now instead of installing some random class that you found on the internet and wearing out the Ctrl + R on your keyboard working out the bugs you can install a package knowing that it is stable. I use PHPUnit along with Guard to run my tests as I work.</p>
<h4 id="im-sold-how-do-i-start">I’m sold! How do I start?</h4>
<p>It’s actually quite simple to develop packages for use with Composer. The first thing you will need to do is read through the PSR-0 guide so that you understand the namespace style and the autoloading.</p>
<p>Now install Composer. You can find instructions at getcomposer.org. On my Linux machine I just run:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>curl -s https://getcomposer.org/installer | php
</code></pre>
</div>
<p>Now create a folder to hold your Composer packages and inside that create a composer.json file. This file is used to specify which packages the project will need. We want to install our new package inside this project but since our new package won’t be submitted to Packagist until it is finished we need to specify a Github repository for Composer to fetch it from.</p>
<div class="highlighter-rouge"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nt">"repositories"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nt">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"vcs"</span><span class="p">,</span><span class="w">
</span><span class="nt">"url"</span><span class="p">:</span><span class="w"> </span><span class="s2">"http://github.com/jerel/upload"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nt">"require"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nt">"jerel/upload"</span><span class="p">:</span><span class="w"> </span><span class="s2">"dev-master"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre>
</div>
<p>That’s it for the project file itself. Now we are ready to make the first files of the actual package that we’re creating. To do this first init a Git repository (it can be anywhere, it doesn’t need to be in the project folder yet) and create a composer.json file in its root. Then open the file and make it look something like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"jerel/upload"</span><span class="p">,</span><span class="w">
</span><span class="nt">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"library"</span><span class="p">,</span><span class="w">
</span><span class="nt">"description"</span><span class="p">:</span><span class="w"> </span><span class="s2">"This is the world's best upload package!"</span><span class="p">,</span><span class="w">
</span><span class="nt">"keywords"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"upload"</span><span class="p">,</span><span class="w"> </span><span class="s2">"more upload"</span><span class="p">],</span><span class="w">
</span><span class="nt">"homepage"</span><span class="p">:</span><span class="w"> </span><span class="s2">"http://github.com/jerel/upload"</span><span class="p">,</span><span class="w">
</span><span class="nt">"license"</span><span class="p">:</span><span class="w"> </span><span class="s2">"MIT"</span><span class="p">,</span><span class="w">
</span><span class="nt">"require"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nt">"php"</span><span class="p">:</span><span class="w"> </span><span class="s2">">=5.3.0"</span><span class="p">,</span><span class="w">
</span><span class="nt">"predis/predis"</span><span class="p">:</span><span class="w"> </span><span class="s2">"v0.7.3"</span><span class="w">
</span><span class="p">},</span><span class="w">
</span><span class="nt">"authors"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
</span><span class="p">{</span><span class="w">
</span><span class="nt">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Jerel Unruh"</span><span class="p">,</span><span class="w">
</span><span class="nt">"email"</span><span class="p">:</span><span class="w"> </span><span class="s2">"some-email@jerel.co"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">],</span><span class="w">
</span><span class="nt">"autoload"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nt">"psr-0"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="nt">"Jerel"</span><span class="p">:</span><span class="w"> </span><span class="s2">"core/"</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre>
</div>
<p>You will notice that this composer.json requires the predis package. We might want to use Redis for a resizing queue so we just tell Composer that anytime it installs our upload package it also needs to install the predis package. Isn’t that cool? You don’t have to tell other developers to install a bunch of different packages, if they install yours Composer will automatically install everything that it needs.</p>
<p>Now commit this file and push it to the repository that you specified in the project’s composer.json file (the first one you created). Now that it is pushed to the master branch of that repo you will be able to install it via Composer. <code class="highlighter-rouge">cd</code> to the project directory and run <code class="highlighter-rouge">composer install</code></p>
<p>This will create a directory structure like this: <code class="highlighter-rouge">project/vendor/jerel/upload/</code>. Now you can <code class="highlighter-rouge">cd</code> to <code class="highlighter-rouge">project/vendor/jerel/upload</code> and start writing your code. You will notice that in the package’s composer.json there is this line: <code class="highlighter-rouge">"psr-0": {"Jerel": "core/"}</code></p>
<p>This tells Composer that when the <code class="highlighter-rouge">Jerel</code> namespace is used it should use the <code class="highlighter-rouge">project/vendor/jerel/upload/core</code> folder as the root to load classes from. So <code class="highlighter-rouge">project/vendor/jerel/upload/core/Jerel/Upload.php</code> will be autoloaded when you use <code class="highlighter-rouge">$uploader = new Jerel\Upload;</code></p>
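<p>The mapping rule itself is mechanical. Here is a rough sketch of the PSR-0 class-to-path translation in Python — an illustration of the convention only, not Composer’s real autoloader:</p>

```python
def psr0_path(fully_qualified_class, root="core/"):
    # PSR-0: namespace separators become directory separators,
    # underscores in the final class name also become directory
    # separators, and ".php" is appended.
    namespace, _, class_name = fully_qualified_class.rpartition("\\")
    parts = namespace.split("\\") if namespace else []
    parts += class_name.split("_")
    return root + "/".join(parts) + ".php"

print(psr0_path("Jerel\\Upload"))        # core/Jerel/Upload.php
print(psr0_path("Jerel\\Storage_File"))  # core/Jerel/Storage/File.php
```

<p>Composer’s generated autoloader performs this lookup for you at runtime the first time an unknown class is used.</p>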
<p>That’s about it.</p>
<p>Create a <code class="highlighter-rouge">tests</code> folder for your unit tests in <code class="highlighter-rouge">project/vendor/jerel/upload</code> and go to town with your development. When you have published a package to Packagist.org you can remove the <code class="highlighter-rouge">"repositories": [ ]</code> info from your project’s composer.json as Composer will search for the package name on Packagist.org by default. So any later projects you do can use your awesome uploader library by simply placing this in a composer.json file and running <code class="highlighter-rouge">composer install</code>:</p>
<div class="highlighter-rouge"><pre class="highlight"><code><span class="p">{</span><span class="w">
</span><span class="nt">"require"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
</span><span class="nt">"jerel/upload"</span><span class="p">:</span><span class="w"> </span><span class="s2">"1.0.*"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre>
</div>
The computer that lives in an aquarium2012-07-18T18:00:00+00:00https://jerel.co/blog/2012/07/the-computer-that-lives-in-an-aquarium<p>I needed a new computer and this time I decided to go with something a little less dull than the motherboard in a tin box… I knew that the guys at pugetsystems.com had put together a computer similar to this and it had run without problems for a couple years. They had used the Eclipse System 6 (6 gallon) aquarium and I went with that too since an ATX motherboard fits perfectly along the back wall.</p>
<hr />
<h4 id="time-to-show-newegg-some-love">Time to show Newegg some love…</h4>
<p>Here is my parts list:</p>
<ul>
<li>GA-EP43T-USB3 GIGABYTE motherboard (Pros: 10 usb ports and support for 16GB ram)</li>
<li>4GB (for now) of G.SKILL RAM 1600 DDR3</li>
<li>430 watt Thermaltake PSU. I hate picking power supplies. This one had good reviews so I ran with it.</li>
<li>fanless video card with the NVIDIA GeForce 9400 GT chip and 512MB memory</li>
<li>Intel Core 2 Duo E7400 Wolfdale 2.8GHz processor</li>
<li>Western Digital 640GB 7200 RPM hard drive</li>
<li>Two Acer 20” monitors</li>
<li>5 gallons of mineral oil from my local veterinarian</li>
<li>Ubuntu 10.04</li>
</ul>
<p>Here are a few pictures of the build process:</p>
<h4 id="the-unaltered-aquarium">The unaltered aquarium</h4>
<p><img src="/assets/blog/1.jpg" alt="1.jpg" /></p>
<h4 id="marked-with-masking-tape">Marked with masking tape</h4>
<p><img src="/assets/blog/2.jpg" alt="2.jpg" /></p>
<h4 id="after-having-the-top-trimmed">After having the top trimmed</h4>
<p><img src="/assets/blog/3.jpg" alt="3.jpg" /></p>
<h4 id="holes-drilled-for-the-motherboard">Holes drilled for the motherboard</h4>
<p><img src="/assets/blog/5.jpg" alt="5.jpg" /></p>
<h4 id="with-everything-fit-in-place">With everything fit in place</h4>
<p><img src="/assets/blog/7.jpg" alt="7.jpg" /></p>
<h4 id="the-side-view-of-the-wire-routing">The side view of the wire routing</h4>
<p><img src="/assets/blog/8.jpg" alt="8.jpg" /></p>
<h4 id="the-back-of-the-aquarium-and-the-bottom-of-the-motherboard">The back of the aquarium (and the bottom of the motherboard)</h4>
<p><img src="/assets/blog/9.jpg" alt="9.jpg" /></p>
<h4 id="top-view-with-the-lid-off">Top view with the lid off</h4>
<p><img src="/assets/blog/10.jpg" alt="10.jpg" /></p>
<h4 id="the-hard-drive-is-mounted-to-the-bottom-of-the-lid">The hard drive is mounted to the bottom of the lid</h4>
<p><img src="/assets/blog/11.jpg" alt="11.jpg" /></p>
<h4 id="a-view-of-the-motherboard-connections-with-the-access-lid-open">A view of the motherboard connections with the access lid open</h4>
<p><img src="/assets/blog/12.jpg" alt="12.jpg" /></p>
<h4 id="all-done">All done!</h4>
<p><img src="/assets/blog/14.jpg" alt="14.jpg" /></p>
Free virtualization on the Linux desktop without VirtualBox or VMware2012-05-24T18:00:00+00:00https://jerel.co/blog/2012/05/free-virtualization-on-the-linux-desktop-without-virtualbox-or-vmware<p>For a number of years I have run virtual machines on Linux for testing purposes and to run software that demanded a Windows environment. I started with VirtualBox back in the day and never bothered to switch. Until now. I've been working with Proxmox (a free and open source hypervisor that makes use of KVM) a lot and decided to try something besides VirtualBox for my new laptop. Here's how you can set it up.</p>
<hr />
<p>I am running Ubuntu 12.04 on my laptop (from here on I’ll refer to it as “host”). Virtualization is done via KVM and the GUI management is done with Virtual Machine Manager. When finished you will be able to create, run, manage, and delete virtual machines from a friendly interface. Your end goal is this:</p>
<p><img src="/assets/blog/booting-xp.png" alt="booting-xp.png" /></p>
<p>And you can manage the machine from this slick interface:</p>
<p><img src="/assets/blog/xp-performance.png" alt="xp-performance.png" /></p>
<p>Ready? Let’s get started!</p>
<p>First we want to make sure your host processor can support virtualization. Open a terminal and run this command:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>egrep -c '(vmx|svm)' /proc/cpuinfo
</code></pre>
</div>
<p>If it returns 1 or greater your hardware will work with KVM. Depending on your computer manufacturer you may need to enable virtualization in your BIOS.</p>
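<p>If you prefer, the same check can be scripted. Here is a rough Python equivalent of the egrep one-liner (counting flag occurrences rather than matching lines, which is enough for a yes/no answer):</p>

```python
import re

def supports_hardware_virtualization(cpuinfo_text):
    # vmx is Intel's VT-x flag and svm is AMD's equivalent; seeing
    # either in /proc/cpuinfo means the CPU can do hardware
    # virtualization.
    return len(re.findall(r"vmx|svm", cpuinfo_text)) >= 1

# On a real machine you would pass open("/proc/cpuinfo").read();
# this made-up flags line stands in for it here.
sample = "flags\t\t: fpu vme de pse msr vmx est tm2 ssse3\n"
print(supports_hardware_virtualization(sample))  # True
```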
<p>Now let’s install the three pieces of software. In your open terminal window run this command to download and install:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>sudo apt-get install kvm libvirt-bin virt-manager
</code></pre>
</div>
<p>Now go to your menu and open the newly installed Virtual Machine Manager. If it doesn’t add to the menu automatically then type <code class="highlighter-rouge">virt-manager</code> in your terminal and then pin the icon to the menu manually. When it opens it should say <code class="highlighter-rouge">localhost (QEMU) Connecting</code> and then successfully connect. If it says <code class="highlighter-rouge">Not Connected</code> or when you try to create a machine you get <code class="highlighter-rouge">Error: no active connection to install on.</code> then you have one more step left to do. In your terminal type this command:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>sudo gedit /etc/group
</code></pre>
</div>
<p>In that file (probably on the last line) you will see a line that looks like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>libvirtd:x:127:
</code></pre>
</div>
<p>You want to add your username to the end of that line. So mine would end up like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>libvirtd:x:127:jerel
</code></pre>
</div>
<p>Now reboot your host machine. libvirtd is the API that Virtual Machine Manager uses to talk to KVM. Without your user being added to its group it doesn’t have permission to connect. With this done you should now be ready to open Virtual Machine Manager and create your first machine. When Virtual Machine Manager opens click the icon in the top left corner and a wizard will guide you through creating your first machine. You will need an installation CD or an ISO image of the operating system that you wish to use.</p>
<p>Happy virtualizing and let me know how it goes in the comments below.</p>
A simple solution to CodeIgniter CSRF protection and Ajax2012-03-16T17:00:00+00:00https://jerel.co/blog/2012/03/a-simple-solution-to-codeigniter-csrf-protection-and-ajax<p>We all know that we should enable CSRF protection if we want to make our apps resistant to cross site attacks. But what if we already have an Ajax heavy application with tons of POST requests? Do we want to go through the app and add code to each Ajax request? I don't. Here's a simple solution if you are using jQuery.</p>
<hr />
<h4 id="first-the-basics">First the basics</h4>
<p>You can read more about this with a quick Google search but here’s a crash course: CodeIgniter sets a cookie with a hash as a value. Then when the page is loaded the <code class="highlighter-rouge">form_open()</code> helper adds a hidden input to your forms that contains the same value. When the form is submitted by the visitor that hidden input is compared with the cookie value. If they do not match the request is rejected before it ever reaches your controller. This keeps malicious users from submitting a request to your controller from outside your site, possibly by redirecting your own logged in browser.</p>
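<p>The comparison at the heart of this is tiny. Here is a sketch of the double-submit idea in Python — this is not CodeIgniter’s actual code, and the function names are made up:</p>

```python
import hmac
import secrets

def issue_csrf_token():
    # The framework sets this value in a cookie when rendering the page,
    # and form_open() echoes the same value into a hidden input.
    return secrets.token_hex(16)

def csrf_ok(cookie_value, posted_value):
    # On POST the hidden input must match the cookie. compare_digest
    # avoids leaking information through timing differences.
    if not cookie_value or not posted_value:
        return False
    return hmac.compare_digest(cookie_value, posted_value)

token = issue_csrf_token()
print(csrf_ok(token, token))     # True: a normal form submission
print(csrf_ok(token, "forged"))  # False: a cross-site request
```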
<p>Traditionally the solution is to read the cookie value before doing an Ajax request and then send it along. This works fine unless you have numerous places to add the code or if you are building an extendable system and other developers may not know how to solve this problem in their own code.</p>
<h4 id="a-solution">A solution</h4>
<p>jQuery has the perfect tool for this job: <code class="highlighter-rouge">$.ajaxSetup</code>. You can read all about it here: jQuery documentation. <code class="highlighter-rouge">$.ajaxSetup</code> allows us to pass data along with every request so it is the perfect fit for sending our CSRF token. All post data from Ajax functions throughout your application will be merged with the data set by <code class="highlighter-rouge">$.ajaxSetup</code>. As far as your individual Ajax calls are concerned the CSRF token doesn’t exist. The only thing needed for this to work is jQuery and the Cookie plugin. You can download the cookie plugin from here.</p>
<h4 id="the-code">The code</h4>
<div class="highlighter-rouge"><pre class="highlight"><code>$(function($) {
// this bit needs to be loaded on every page where an ajax POST may happen
$.ajaxSetup({
data: {
csrf_test_name: $.cookie('csrf_cookie_name')
}
});
// now you can use plain old POST requests like always
$.post('site.com/controller/method', { name : 'Jerel' });
});
</code></pre>
</div>
<p>Now if you did <code class="highlighter-rouge">var_dump($_POST)</code> in your controller at <code class="highlighter-rouge">site.com/controller/method</code> you would see “Jerel” and the csrf token (something like <code class="highlighter-rouge">3a92ba230fd952a2bcd6faa311b07015</code>).</p>
<h4 id="any-catches">Any catches?</h4>
<p>Only one that I know of. It appears that the BlueImp uploader will not inherit the data from <code class="highlighter-rouge">$.ajaxSetup</code> so you will have to pass the cookie value manually in BlueImp as part of its data. I’m not entirely sure why this is as their documentation states that it uses jQuery’s <code class="highlighter-rouge">$.ajax</code>. If you have an explanation let me know.</p>
Using Python for super fast regex search and replace2011-12-21T00:50:00+00:00https://jerel.co/blog/2011/12/using-python-for-super-fast-regex-search-and-replace<p>I recently needed to do a regex search and replace on a large MySQL file. I often use my code editor for search & replace but I tried Komodo Edit, Sublime Text 2, and Gedit and they struggled greatly to open the file and none of them could search it. I know there's sed, grep + awk, etc. but I decided to give Python a try since I've been working at learning it.</p>
<hr />
<p>I was amazed at how quickly Python searched the file. It hardly made a blip on my CPU usage. Here’s what I used:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>[jerel@laptop ~] $ python
# import the modules that we need. (re is for regex)
import os, re
# set the working directory for a shortcut
os.chdir('/home/jerel/Desktop')
# open the source file and read it
fh = file('test.sql', 'r')
subject = fh.read()
fh.close()
# create the pattern object. Note the "r". In case you're unfamiliar with Python
# this is to set the string as raw so we don't have to escape our escape characters
pattern = re.compile(r'\(([0-9])*,')
# do the replace
result = pattern.sub("('',", subject)
# write the file
f_out = file('test.sql', 'w')
f_out.write(result)
f_out.close()
</code></pre>
</div>
<p>This works fine for fairly large files, although I’m expecting that it wouldn’t work for files in excess of 4GB on a 32-bit system as the entire file is read into memory. If you are working with massive files then it’d probably be wise to iterate over the file object or use the fileinput module. I’m not the guy to ask quite yet :)</p>
Importing data into PyroCMS using Migrations2011-12-08T00:50:00+00:00https://jerel.co/blog/2011/12/importing-data-into-pyrocms-using-migrations<p>A client asked me to import a bunch of data from his old CMS into PyroCMS. His old system had insecure passwords and a bunch of data that was tied to the user via the user id. I wrote a Migration to pull the data out of his old tables, register a secure PyroCMS user, and record the user's uploads and other data. Here's how and why…</p>
<hr />
<p>CodeIgniter’s Migrations make this a fairly simple task. <code class="highlighter-rouge">$this</code> is available in the migration so you can easily load models, libraries, dbforge and so on to help process or insert the data.</p>
<p>To do this create a new migration named <code class="highlighter-rouge">070_Import.php</code> and drop it into <code class="highlighter-rouge">system/cms/migrations</code>. (PyroCMS’ migrations are at 69 at present.) Adjust it to one number higher than the highest in the migration folder.</p>
<p>The migration file’s contents should look like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code><?php defined('BASEPATH') OR exit('No direct script access allowed');
class Migration_Import extends CI_Migration {
public function up()
{
// import logic here
}
public function down()
{
// undo the import here
}
}
</code></pre>
</div>
<p>Now import the old tables into the main database and move all of the files (images, zips, or whatever) associated with the records into a folder within <code class="highlighter-rouge">uploads/</code>. Now you can write the logic to pull the data out of the old table, clean it up if need be, and then insert it into a PyroCMS table. Uploads can be renamed and copied to a different folder in <code class="highlighter-rouge">uploads/</code>.</p>
<p>When you are ready to try the import simply open <code class="highlighter-rouge">system/cms/config/migration.php</code> and increment the <code class="highlighter-rouge">migration_version</code> to one number higher. Reload the page and the <code class="highlighter-rouge">up()</code> method in the migration will run.</p>
<p>More than likely your import won’t go properly the first time you try due to oversights or mistakes in your code. This is where the <code class="highlighter-rouge">down()</code> method comes in. Write a couple of lines of code that delete the newly created records and the newly copied files in <code class="highlighter-rouge">uploads/</code>. If your import fails, lower the number in the config file by 1 and reload the page. Make necessary tweaks to your import code, increment the number in the config, and reload the page.</p>
<p>When you are confident that the import is finished you can delete the migration file. You then need to open your database and look for a table named <code class="highlighter-rouge">migration_version</code> or <code class="highlighter-rouge">schema_version</code> depending on the version of PyroCMS. Change its value and the migration config file’s value back to what it was before you started this import exercise.</p>
Removing a module name from the url in PyroCMS or CodeIgniter2011-11-15T02:00:00+00:00https://jerel.co/blog/2011/11/removing-a-module-name-from-the-url-in-pyrocms-or-codeigniter<p>Let's say we are writing a module named "vacations" that will display information about tourist destinations. Sometimes it's desirable to display your urls like: site.com/bahamas instead of site.com/vacations/bahamas. So here's how you can do that in more than one module…</p>
<hr />
<p>In the main application route file you will need to change one line. Open <code class="highlighter-rouge">system/cms/config/routes.php</code> and set <code class="highlighter-rouge">$route['404_override']</code> to ‘vacations’ or whatever your module name is. By default it is set to ‘pages’ in PyroCMS.</p>
<p>Now when a visitor tries to view the page called site.com/contact he will get that page just like usual if it exists. But when he visits site.com/bahamas CodeIgniter will first turn the request over to the default controller (pages) which attempts to find a page with that uri. When no page is found it continues on to the 404 handler which is now our ‘vacations’ module. Now we have an opportunity to retrieve information about ‘bahamas’ from the database and display it. Everything works as if “vacations” was set as the default module.</p>
<p>However since our module is set as the <code class="highlighter-rouge">404_override</code> it’s our responsibility to show a 404 when no information is found to match the uri segment within our module. This can be done by using <code class="highlighter-rouge">show_404('404');</code></p>
<p>I’ve found that this works quite well and the only “waste” is the one query that the Pages (or default) module makes to check for a page with that uri.</p>
Generating a project changelog using Git log2011-07-27T03:00:00+00:00https://jerel.co/blog/2011/07/generating-a-project-changelog-using-git-log<p>This is a simple and quick way to generate a changelog for your project using your git commit messages.</p>
<hr />
<p>By default git allows you to format its log messages while it outputs them. By using this feature we can wrap our commit message, create a link to the actual commit, and all sorts of other fun. Then by running it through grep we can filter it further. Here is an example:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>git log v2.2.0...v2.2.1 --pretty=format:'<li> <a href="http://github.com/jerel/project/commit/%H">view commit &bull;</a> %s</li> ' --reverse | grep -v Merge
</code></pre>
</div>
<p>Output (link is 404):</p>
<p><a href="http://github.com/jerel">view commit</a> • Fixed a bug in the sympathy sector of the artificial intelligence module.</p>
<h4 id="now-for-a-quick-explanation">Now for a quick explanation:</h4>
<ul>
<li><code class="highlighter-rouge">v2.2.0...v2.2.1</code> tells git log that we only want it to return the commits between those two tags in our project.</li>
<li>In the <code class="highlighter-rouge">--pretty=format:' ' </code> flag we pass the html that we want generated for each commit using the <code class="highlighter-rouge">%H</code> and <code class="highlighter-rouge">%s</code> placeholders to output the hash and the message, respectively.</li>
<li><code class="highlighter-rouge">--reverse</code> outputs the commits in the order they were made instead of the most recent first.</li>
<li>Finally with <code class="highlighter-rouge">| grep -v Merge</code> we pipe the output through grep and specify that we don’t want the changelog cluttered up with merge messages (any message with the string “Merge” in it is discarded).</li>
</ul>
<p>You could even stick this in a post-receive hook to generate a dynamic changelog, or use grep to show only the messages that contain a keyword (like GitHub does with its “Closes issue #501” feature).</p>
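<p>To see the whole pipeline in action without an existing project, you can rehearse it in a throwaway repository. This is just a sketch: the repository, tags, commit messages, and user identity below are all invented for the demo, and the link URLs will 404 just like the example above.</p>

```shell
# set up a throwaway repository with a couple of tagged releases
demo=$(mktemp -d)
cd "$demo"
git -c init.defaultBranch=master init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo one > file
git add file
git commit -qm 'Initial commit'
git tag v2.2.0

echo two >> file
git commit -qam 'Fixed a bug. Closes issue #501'
echo three >> file
git commit -qam 'Merge branch feature into master'
git tag v2.2.1

# same pipeline as above, pointed at the demo repository:
# only commits between the tags, html per commit, merges filtered out
changelog=$(git log v2.2.0...v2.2.1 --pretty=format:'<li> <a href="http://github.com/jerel/project/commit/%H">view commit &bull;</a> %s</li> ' --reverse | grep -v Merge)
echo "$changelog"
```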
An introduction to Git and Git website deployments2011-06-04T17:00:00+00:00https://jerel.co/blog/2011/06/an-introduction-to-git-and-git-website-deployments<p>This post is an introduction for web developers who may have never used version control before. Learn how to control your websites' source code and even deploy the site without using ftp.</p>
<hr />
<p><em>Note: All commands that I have posted are from the Linux command line. If you are on a different platform you’ll need to make adjustments where necessary. Most of the workflow on your local machine can also be done with a gui interface so feel free to do so. I also assume that you have ssh access to your web server.</em></p>
<h4 id="get-git">Get Git</h4>
<p>First you will need to install the git software on your local machine and on your server. You can download it from git’s download page or by using your system’s package manager:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>sudo apt-get install git-core
</code></pre>
</div>
<h4 id="creating-your-first-project">Creating your first project</h4>
<p>Your project files can be anywhere on your computer, git doesn’t care. To create a new project (called a repository) you need to cd to the root of your source code and init a git repository. Like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>cd /var/www/public_html/my_website
git init
</code></pre>
</div>
<p>Alternatively you may want to clone an existing repository from somewhere. In that case do not run the git init command. Use git clone instead.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>cd /var/www/public_html/
git clone git@github.com:organization/my_website.git
</code></pre>
</div>
<p>Now if you look for a hidden folder in the root of that directory there will be one named .git. That folder contains all of the history, the source for the different branches, etc. If you want to move your project to a different directory or even rename it, it’s no problem. Just make sure the .git folder moves with it. As long as the .git folder stays safe you can even delete all the files for your project and they can still be restored using <code class="highlighter-rouge">git reset --hard</code>.</p>
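<p>That safety net is easy to verify in a scratch repository (the paths, file contents, and user identity here are made up for the demo). Delete the working files, run a hard reset, and everything comes back from the .git folder:</p>

```shell
# scratch repository with one committed file
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo '<h1>home</h1>' > index.html
git add index.html
git commit -qm 'First commit'

# delete the working files (but leave the .git folder alone)...
rm index.html

# ...and restore them from the repository history
git reset -q --hard
cat index.html
```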
<h4 id="adding-the-files">Adding the files</h4>
<p>After initializing a new git repository you must tell git which files you want it to track (cloning has this step done already). You can do this via the command line using the “add” command.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>git add --all
</code></pre>
</div>
<p>This will make git start tracking all files in the project directory. If you don’t want to add all of the files you can specify the file name instead of the <code class="highlighter-rouge">--all</code> flag. You can also create a <code class="highlighter-rouge">.gitignore</code> file in the root of your directory that specifies which files git should leave alone. You may want to add upload folders, cache folders, etc. to the .gitignore file.</p>
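<p>A minimal .gitignore for the folders mentioned above might look like the sketch below (the folder names are only examples). <code class="highlighter-rouge">git check-ignore</code> confirms that git is leaving them alone:</p>

```shell
# scratch repository to try the ignore rules in
repo=$(mktemp -d)
cd "$repo"
git init -q

# ignore upload and cache folders plus log files (example paths)
cat > .gitignore <<'EOF'
uploads/
cache/
*.log
EOF

# check-ignore prints the path (and exits 0) when a file would be ignored
git check-ignore uploads/photo.jpg
git check-ignore cache/page.html
```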
<h4 id="the-first-commit">The first commit</h4>
<p>Git now knows that the website files exist but it is not controlling them. To do this you need to do a git commit:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>git commit -a -m 'This is the first commit.'
</code></pre>
</div>
<p>The <code class="highlighter-rouge">-a</code> flag stages every file git already tracks (including modifications and deletions) before committing. The <code class="highlighter-rouge">-m</code> flag supplies the commit message.</p>
<h4 id="pushing-to-a-server">Pushing to a server</h4>
<p>The first way that we’ll talk about is pushing to a user account named “git” on your remote server. The repositories you push there will be for collaboration and backup purposes. For example, if Jim and Bob both work from home, one can edit files, make commits, and push to the server, and the other can pull from the server, edit more files, make his own commits, and push back. And if both of their laptops were destroyed at the same time, all they would have to do to get their work onto new machines is a git clone from the remote repositories.</p>
<p>To get this working you’ll need to ssh into your server. Add a user account named “git” (or whatever you like) and set up ssh keys on that account for everybody who will have access to the repositories. Now create a folder, cd to it, and initiate a bare git repo. You must use the <code class="highlighter-rouge">--bare</code> flag when creating a repo that users will pull from.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>mkdir /home/git/my_website.git
cd /home/git/my_website.git
git init --bare
</code></pre>
</div>
<p>Now you are ready to add your server as a remote (a remote is shorthand so you don’t have to type the whole path every time) and push your files. When you push you need to specify the branch name (usually master until you get more advanced and start using additional branches).</p>
<div class="highlighter-rouge"><pre class="highlight"><code>git remote add origin ssh://git@example.com/home/git/my_website.git
git push origin master
</code></pre>
</div>
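<p>If you would like to see that round trip before setting up a real server, a bare repository in a temporary directory can stand in for the git account (the paths and user identity here are invented for the demo):</p>

```shell
# stand-in for /home/git/my_website.git on the server
hub=$(mktemp -d)/my_website.git
git init -q --bare "$hub"

# workstation: commit and push to the shared repository
work=$(mktemp -d)
cd "$work"
git -c init.defaultBranch=master init -q
git config user.email "jim@example.com"
git config user.name "Jim"
echo 'hello' > index.php
git add index.php
git commit -qm 'First commit'
git remote add origin "$hub"
git push -q origin master

# a teammate (or a replacement laptop) can clone everything back
clone=$(mktemp -d)/copy
git clone -q "$hub" "$clone"
cat "$clone/index.php"
```

<p>On a real server the remote path would of course be an ssh:// url rather than a local directory.</p>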
<h4 id="pushing-to-deploy-a-website">Pushing to deploy a website</h4>
<p>The second way that we’ll talk about is deploying the site using git. It is very similar except the remote repository must be configured a little differently. Ssh into your server and create the folder in the web directory. Then you need to init a non-bare repository just like you did on your desktop. That’s important! A bare repository like you created in the first server example does not have a copy of the directory tree. The file information is all stored within git. That will not work for a live website.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>mkdir /home/client/my_website
cd /home/client/my_website
git init
</code></pre>
</div>
<p>We now need to do a couple tweaks to the git config on the remote server to allow us to “overwrite” the files when we do a git push. First open /home/client/my_website/.git/config in your favorite editor and add the following code to the bottom:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>[receive]
  denyCurrentBranch = false
</code></pre>
</div>
<p>Then create /home/client/my_website/.git/hooks/post-receive, paste this into it, and make sure the file is executable:</p>
<div class="highlighter-rouge"><pre class="highlight"><code><span class="c">#!/bin/sh</span>
<span class="c"># Update the working tree after changes have been pushed here</span>
<span class="nb">cd</span> ..
env -i git reset --hard
</code></pre>
</div>
<p>Alright! Now we just need to set up the local workstation to push it live. What we’re doing here is adding another remote. You’ll still have the “origin” remote that you use while you’re working but when you are ready to update the site you will push to live instead of origin.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>git remote add live ssh://git@example.com/home/client/my_website
git push live master
</code></pre>
</div>
<p>You can also repeat this process if you want a staging site. Then all you have to do is “git push staging master”, let your quality assurance team or your client review it, and then “git push live master”.</p>
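<p>The whole push-to-deploy setup can be rehearsed locally before touching a real server. In this sketch temporary directories stand in for /home/client/my_website and the workstation copy, <code class="highlighter-rouge">git config</code> sets denyCurrentBranch instead of editing the file by hand, and the hook unsets GIT_DIR rather than using env -i; the effect is the same.</p>

```shell
# stand-in for the server-side repository (normally /home/client/my_website)
server=$(mktemp -d)
git -c init.defaultBranch=master init -q "$server"
git -C "$server" config receive.denyCurrentBranch false

# equivalent of the post-receive hook above
cat > "$server/.git/hooks/post-receive" <<'EOF'
#!/bin/sh
# update the working tree after changes have been pushed here
unset GIT_DIR
cd ..
git reset --hard
EOF
chmod +x "$server/.git/hooks/post-receive"

# stand-in for the workstation copy
work=$(mktemp -d)
cd "$work"
git -c init.defaultBranch=master init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo '<h1>v1</h1>' > index.html
git add index.html
git commit -qm 'First commit'
git remote add live "$server"
git push -q live master

# a second push overwrites the deployed files
echo '<h1>v2</h1>' > index.html
git commit -qam 'Update homepage'
git push -q live master
cat "$server/index.html"
```

<p>On a real server the “live” remote would be an ssh:// url exactly as shown above.</p>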
<h4 id="working-with-an-upstream-repository">Working with an upstream repository</h4>
<p>Now supposing you cloned a content management system and you want to keep your website up-to-date, you can add a git remote called “upstream”. This allows you to pull the latest release from their repository, merge it into your copy, and push it to live. I’ll give an example using PyroCMS. (This is all done on your workstation.)</p>
<div class="highlighter-rouge"><pre class="highlight"><code>cd /var/www/public_html/my_website
git remote add upstream git@github.com:pyrocms/pyrocms.git
git pull upstream v1.2.0
# do some testing locally
git push live master
</code></pre>
</div>
<h4 id="working-with-a-github-fork">Working with a GitHub fork</h4>
<p>If you have forked a project on github then you can send code back to the main project via your fork. Just add the fork as a remote just like the “Pushing to a server” example.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>git remote add fork git@github.com:username/project.git
git pull upstream master
git push fork master
# you now know that your fork is up-to-date. Time to make your edits.
git commit -a -m 'I fixed a bug for you guys.'
git push fork master
</code></pre>
</div>
<p>Now go to your GitHub control panel and open a pull request to let the team know that your fork has code that they want.</p>
<p>One important note: you will not always push your contributions to the master branch. For example, on the PyroCMS project most bug fixes need to go to the develop branch, as the master branch is reserved for releases. Check the documentation for the project you are contributing to. A second, more important note: I used the <code class="highlighter-rouge">git commit -a</code> command in the example. Make sure that you do not commit sensitive data (like database passwords) and push it, as it then becomes public.</p>
<h4 id="merges-conflicts-and-how-to-handle-them">Merges, conflicts, and how to handle them</h4>
<p>If there is more than one person working on a project you will soon have a merge conflict. When you start your work you should always pull from the shared repository. This gets the latest edits from your team members and reduces the number of merges. Now suppose you pulled, edited the index.php file, committed it, and pushed; all would be fine unless someone else happened to do the same thing and pushed before you did. If they pushed first then you will get an error like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>To git@example.com/home/git/my_website.git
! [rejected] master -> master (non-fast forward)
error: failed to push some refs to 'git@example.com/home/git/my_website'
</code></pre>
</div>
<p>If this happens then you need to pull again. Git will then attempt to merge the files together. If the same file has been edited by multiple people then it will place markers in the file to show which lines are older and which are newer and then warn you that the merge failed. You then open those marked files with your editor and decide which lines you want to keep. Commit the files now that the markers are removed and then push again.</p>
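<p>Conflict markers are easier to recognize once you have provoked one on purpose. This scratch-repository sketch (branch name, file contents, and identity all invented) has two branches edit the same line of index.php and then merges them:</p>

```shell
repo=$(mktemp -d)
cd "$repo"
git -c init.defaultBranch=master init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo 'echo "hello";' > index.php
git add index.php
git commit -qm 'First commit'

# one team member edits the line on a branch...
git checkout -qb bob
echo 'echo "hello from Bob";' > index.php
git commit -qam 'Bob edits index.php'

# ...while another edits the same line on master
git checkout -q master
echo 'echo "hello from Jim";' > index.php
git commit -qam 'Jim edits index.php'

# the merge fails, and git writes conflict markers into the file
git merge bob || true
cat index.php
```

<p>Everything between <code class="highlighter-rouge">&lt;&lt;&lt;&lt;&lt;&lt;&lt;</code> and <code class="highlighter-rouge">=======</code> is your version; everything from there down to <code class="highlighter-rouge">&gt;&gt;&gt;&gt;&gt;&gt;&gt;</code> is the incoming one.</p>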
<h4 id="closing-thoughts">Closing thoughts…</h4>
<p>I’m aware that there are many, many ways to do things with git, especially site deployments. However this method seems the simplest to me and doesn’t require much setup. If you can ssh to the account you can use git to manage the files. I’d also like to give credit to Phil Sturgeon as he has a <a href="http://philsturgeon.co.uk/blog/2010/02/Deploying-websites-with-Git">blog post</a> about deploying with this method. I know this isn’t an exhaustive guide but hopefully it will get you started in the right direction.</p>
<p>If you need better control over who has access to the repositories on your own server check out gitosis. It allows you to manage your repositories via git itself. You can give some users read-only access, others read-write, and so on. Also you may find it worthwhile to buy a GitHub subscription instead of hosting your own repositories.</p>
<p>When you get into branching I would recommend reading about <a href="http://nvie.com/posts/a-successful-git-branching-model/">git-flow</a>. It is a wonderful way to organize branches and it helps to make sure the code you release is as stable as possible. (We use it on the PyroCMS project. That’s why code contributions are not welcome on the master branch)</p>
Email newsletters for your website using the Newsletter module for PyroCMS2011-02-05T00:30:00+00:00https://jerel.co/blog/2011/02/email-newsletters-for-your-website-using-the-newsletter-module-for-pyrocms<p>v1.0 of the Newsletter module was released on February 7, 2011 and adds an exciting new functionality to the PyroCMS platform.</p>
<hr />
<p>As of v1.0 the Newsletter module has these features:</p>
<ul>
<li>Email templates (comes with 4 already installed)</li>
<li>Track unique &amp; total clicks on embedded links</li>
<li>Track unique &amp; total newsletter opens</li>
<li>PyroCMS <code class="highlighter-rouge">{pyro:}</code> tags are available in the email body</li>
<li>Subscribers can unsubscribe with a single click</li>
<li>Optionally unsubscribe an address from the admin panel</li>
<li>Export subscriber list from admin panel</li>
<li>Send newsletters from your browser or with a cron job</li>
<li>Separate From and Reply addresses</li>
<li>Limit the number of emails sent at once (for picky servers or ISPs)</li>
</ul>
<p>All template and email editing is done with the familiar WYSIWYG editor. Images can also be uploaded and inserted into the email body using the editor. It requires minimal configuration to get started: if you can send email with the Contact form, your server can also send newsletters.</p>
<p>Screenshots:</p>
<p><img src="/assets/blog/create.png" alt="create.png" />
<img src="/assets/blog/send.png" alt="send.png" />
<img src="/assets/blog/settings.png" alt="settings.png" />
<img src="/assets/blog/templates.png" alt="templates.png" /></p>
<p>You can easily automate the sending of emails by setting up a cron job. For example if you were to use curl your command might look like this:</p>
<div class="highlighter-rouge"><pre class="highlight"><code># send at 1am every night
0 1 * * * curl --silent --compressed http://www.example.com/newsletters/cron/gy84kn
</code></pre>
</div>
<p>If you had a large mailing list and couldn’t send the whole list at once you could simply set a limit in Settings->Newsletters and set multiple cron jobs:</p>
<div class="highlighter-rouge"><pre class="highlight"><code># send at 1am
0 1 * * * curl --silent --compressed http://example.com/newsletters/cron/gy84kn
# send again at 1:10am
10 1 * * * curl --silent --compressed http://example.com/newsletters/cron/gy84kn
# and again at 1:20am
20 1 * * * curl --silent --compressed http://example.com/newsletters/cron/gy84kn
</code></pre>
</div>
<p><a href="http://pyrocms.com/store/details/newsletters">You can buy a copy of it here.</a> If you have more questions or if you have feature requests for future versions let me know in the comments.</p>
Getting started with custom module development for PyroCMS2011-02-01T00:30:00+00:00https://jerel.co/blog/2011/02/getting-started-with-custom-module-development-for-pyrocms<p>To help make the development of PyroCMS modules easier I decided to put together a simple module that could be used for a template.</p>
<hr />
<p>While you will not use all of these folders in the average module, the following is the supported folder structure for a module.</p>
<p><img src="/assets/blog/folder-structure.jpg" alt="folder-structure.jpg" /></p>
<p>And at the bare minimum you will have the details.php file and the controllers and views folders.</p>
<p>Download the code (links below), rename the zip and the folder within it to “sample”, and upload it to your installation of PyroCMS to see how it functions. It is hosted at <a href="http://github.com/pyrocms/sample">http://github.com/pyrocms/sample</a> if you happen to be a git user. I intend for the code to be a template to get you started so do what you want with it. Just don’t redistribute it as a tutorial under your name.</p>
<p>Leave your thoughts in the comments below and if you have questions once your development is under way refer to PyroCMS’ <a href="http://documentation.pyrocms.com/">documentation</a> and <a href="http://forum.pyrocms.com">forums</a>.</p>
<p>Update: As of June 21, 2011 this module has been mostly rewritten (I wrote the first version in a big hurry and never finished it to my liking). Create, edit, and delete all function properly, and the front end uses Tags only, no PHP.</p>
<p>Update: As this post is nearing 2 years old I removed the old download link and replaced it with the links below. Download the appropriate version for your installation.</p>
<ul>
<li><a href="https://github.com/pyrocms/sample/zipball/2.1/master">PyroCMS v2.1.x</a></li>
<li><a href="https://github.com/pyrocms/sample/zipball/2.0/master">PyroCMS v2.0.x</a></li>
<li><a href="https://github.com/pyrocms/sample/zipball/1.3/master">PyroCMS v1.3.x</a></li>
</ul>
Using firefox on a headless server to make screenshots of websites2010-10-11T15:30:00+00:00https://jerel.co/blog/2010/10/using-firefox-on-a-headless-server-to-make-screenshots-of-websites<p>I needed a reliable way to generate screenshots of websites for a client's project I'm working on. I wasn't too wild about using a third party like thumbalizr or webshots if I could do it myself. I decided to detail my ordeal here in the hopes it will help somebody else…</p>
<hr />
<p>I’m using Xvfb + Firefox + Imagemagick on a Ubuntu server to accomplish the task. In case you’re not familiar with them Xvfb allows Firefox to output just like it would on a monitor even though there is no screen. Imagemagick then grabs the screenshot from Firefox. And if you don’t know what a server is then this probably isn’t the solution for you :)</p>
<p>Install Xvfb, firefox, and imagemagick</p>
<div class="highlighter-rouge"><pre class="highlight"><code>sudo apt-get install xvfb firefox imagemagick
</code></pre>
</div>
<p>Start Xvfb with the desired screen dimensions on virtual display 1 (1280x960 gives an image with an aspect ratio of 4:3)</p>
<div class="highlighter-rouge"><pre class="highlight"><code>Xvfb :1 -screen 0 1280x960x24 &amp;
</code></pre>
</div>
<p>Open desired url in Firefox on virtual display 1</p>
<div class="highlighter-rouge"><pre class="highlight"><code>DISPLAY=:1 firefox http://www.google.com &amp;
</code></pre>
</div>
<p>Grab a screenshot of the window using imagemagick’s import feature, crop it, compress it, and save the thumbnail as screenshot.jpg in the screenshots folder on the server.</p>
<div class="highlighter-rouge"><pre class="highlight"><code>DISPLAY=:1 import -window root -crop 1264x948+0+0 -resize 200x150 -quality 90 /var/www/screenshots/screenshot.jpg
</code></pre>
</div>
<p>You should now have a screenshot of firefox + google.com. Next, modify firefox so that the browser frame and navigation bar are hidden.</p>
<p>Create a new firefox profile on a local computer. Install the autohide addon from http://www.krickelkrackel.de/autohide/. <em>Update: After working with this some more I decided to do it without any help from addons. After hiding all bars just adjust the imagemagick crop dimensions to remove the top of the browser window.</em> Tweak firefox until all toolbars, the status bar, etc. are hidden when you open the browser. Turn off addon updates and the like; basically turn everything off besides javascript. Now copy the firefox profile folder to your server.</p>
<p>Firefox was also not maximized by default, so I edited localstore.rdf around line 30 and changed the main-window dimensions to match those of Xvfb.</p>
<p>To keep firefox from trying to restore the last session add <code class="highlighter-rouge">user_pref("browser.sessionstore.resume_from_crash", false);</code> to the prefs.js</p>
<p>Now for a summary. Here’s how to run all of the commands at once:</p>
<div class="highlighter-rouge"><pre class="highlight"><code>DISPLAY=:1 firefox http://www.cnn.com &amp; sleep 5 &amp;&amp; DISPLAY=:1 import -window root -crop 1264x948+0+25 -resize 200x150 -quality 90 /var/www/screenshots/screenshot.jpg &amp;&amp; pkill firefox
</code></pre>
</div>
<p>The “pkill firefox” is there to make sure firefox is closed after the screenshot is generated. Otherwise you will end up with many, many tabs open.</p>
<p>And this is what we get for our efforts:</p>
<p><img src="/assets/blog/screenshot.jpg" alt="screenshot generated with ubuntu server + firefox" /></p>
<p>If you have any improvements or questions please leave them in the comments.</p>
Reinstalling Ubuntu with separate home partition without losing data. Tutorial with screenshots.2010-08-12T19:10:00+00:00https://jerel.co/blog/2010/08/reinstalling-ubuntu-with-separate-home-partition-without-losing-data-tutorial-with-screenshots<p>I recently updated my aquarium computer from Ubuntu 9.04 to 10.04. I have a separate home partition for my data and wanted to make sure it survived untouched. I didn't see any step by step instructions so I decided to write a tutorial in the hopes that it will help someone else.</p>
<hr />
<p>Get a live cd or a live usb stick to install from. Just a note: I needed a usb drive to install from and found that usb-creator is an excellent tool for creating a bootable install usb drive.</p>
<ol>
<li>Create the bootable usb drive to install from: <code class="highlighter-rouge">sudo apt-get install usb-creator</code></li>
<li>Run it from the terminal: <code class="highlighter-rouge">usb-creator-gtk</code></li>
<li>Select your downloaded ISO or your live cd</li>
<li>Select the destination usb drive that you want to put Ubuntu on.</li>
<li>You should now have a shiny new live usb drive!</li>
</ol>
<h4 id="install-ubuntu-without-disturbing-your-old-home-partition">Install Ubuntu without disturbing your old /home partition.</h4>
<p>Insert your usb drive or the live cd that you want to install from. You may need to hit F12 when your computer boots and select the media that you wish to boot from.</p>
<p>Boot up all the way and then select Install from the desktop. I got an “irrecoverable error” when I did this so you may have to go to System->Administration->Install. That worked for me. When you get to the partition manager part of the install select Specify partitions manually.</p>
<p><img src="/assets/blog/specify_partitions_manually.jpg" alt="specify_partitions_manually.jpg" /></p>
<p>Then choose your root partition. My root partition was 15GB as you can see. Click change.</p>
<p><img src="/assets/blog/change_partitions.jpg" alt="change_partitions.jpg" /></p>
<p>For the root partition select format and set the mount point as /.</p>
<p><img src="/assets/blog/edit_root.jpg" alt="edit_root.jpg" /></p>
<p>Now select the home partition and click change. Here is the critical part: Do not select format. Select the mount point as /home.</p>
<p><img src="/assets/blog/edit_home.jpg" alt="edit_home.jpg" /></p>
<p>I formatted my swap partition but I’ve gathered that it’s not necessary. Just select the partition as swap and it should be good.</p>
<p>Here is the final screen before the install starts. Check everything carefully! The home partition should <em>not</em> be in the list of partitions that will be formatted!</p>
<p><img src="/assets/blog/install.jpg" alt="install.jpg" /></p>
<p>There you go! You should have a nice new install of Ubuntu and still have all of your old data. (Of course, if you don’t have a backup, don’t blame me.)</p>
<p>Feel free to let me know how it went in the comments.</p>