Starting up Lean

Startups are scary for the uninitiated. People often feel like running away when anyone suggests they get involved with creating a startup, and for good reason. The vast majority of startups fail, many of them spectacularly. Failure often means ruined credit, ruined reputations, and ruined careers, both for startup founders and for the people they managed to dupe into working for them. So why do so many failed entrepreneurs come back and conquer in later ventures? The answer is simple: people learn from their mistakes.

This is at the center of “The Lean Startup”, a book by Eric Ries that introduces the concept of planning for failure to those who would rather just succeed the first time. The book is a decent read on the subject, but I would also suggest “Nail It Then Scale It” by Nathan Furr and Paul Ahlstrom, which translates the idea into a series of steps for entrepreneurs to follow on their paths to success.

The overall movement has seen a lot of success in the form of various high-profile startups that were either sold to larger companies for sizable amounts or that stand, to this day, on their own two feet in the face of constantly shifting economic and social tides. These companies differ from traditional businesses in that they not only embrace failure, they plan for it. They build their entire approach around an idea that IDEO general manager Tom Kelley canonized with the statement "Fail often, to succeed sooner."

This idea, that failure is not only good but, in some cases, desirable, turns conventional wisdom on its head. Making mistakes in traditional businesses can get you fired, whereas making mistakes in a Lean startup (and showing that you learned from them) may secure you a promotion.

In studying up on the Lean startup movement, I've found a lot of responses that are critical of one or more points that Eric Ries (and others) make in their books on the subject. Most of the arguments center on cases where some aspect of the process described by Ries et al. doesn't fully apply. My response to these people is very simple: any process or methodology you choose to adopt as your own should be modified to match your needs. Failing to do so wastes inside knowledge that you have and others cannot possibly have, and it lands the responsibility for the scrapped business or engineering idea squarely on your shoulders.

In short, the Lean startup movement has ideas and principles that apply universally. The principles of success are the same no matter what area of your life you apply them to. So take it with a grain of salt, your grain of salt, and get in the trenches. As Mrs. Frizzle from the Magic School Bus says, “Take chances, make mistakes!”


Security is an Illusion

No encryption algorithm is provably secure.

That's something of a shocker to people who depend on security features built into their browsers to shop, bank, and play online. We assume that because our account balances remain untouched and our social media accounts aren't posting things we don't want them to post, we're safe and good to go. Nothing could be further from the truth. Let's take a quick look at three points at which an attack on an encrypted system is likely to occur.

The Scheme

Cryptography experts don't like the notion that it's impossible to prove that an encryption scheme is secure. To get around this, they speak of things like hardness assumptions, attack vectors, and complexity. A lot of what they say is perfectly valid. For example, assuming that factoring large semiprimes (products of two primes, in the range of 100+ decimal digits) remains a difficult task for cutting-edge computer hardware, RSA encryption (the basis of most online communication at some level) will remain equally difficult to crack. The problem is that the difficulty of the factoring problem is not guaranteed to remain what it is today. It could change overnight with any number of potential discoveries in mathematics, and there is no drop-in replacement waiting in the wings if RSA falls.
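To make that dependence on factoring concrete, here is a toy sketch in C++ using the classic textbook numbers (p = 61, q = 53). These values are far too small to be secure; they are only meant to show where factoring fits in:

#include <cstdint>
#include <iostream>

// Modular exponentiation: (base^exp) mod m
static std::uint64_t modpow(std::uint64_t base, std::uint64_t exp, std::uint64_t m) {
	std::uint64_t result = 1;
	base %= m;
	while (exp > 0) {
		if (exp & 1) result = result * base % m;
		base = base * base % m;
		exp >>= 1;
	}
	return result;
}

int main() {
	const std::uint64_t p = 61, q = 53;   // the secret prime factors
	const std::uint64_t n = p * q;        // 3233: the public modulus
	const std::uint64_t e = 17;           // public exponent
	const std::uint64_t d = 2753;         // private exponent: e*d = 1 mod (p-1)(q-1)

	std::uint64_t message = 65;
	std::uint64_t cipher  = modpow(message, e, n);   // encrypt with the public key (e, n)
	std::uint64_t plain   = modpow(cipher, d, n);    // decrypt with the private key (d, n)
	std::cout << "cipher = " << cipher << ", recovered = " << plain << "\n";

	// Anyone who can factor n back into 61 * 53 can recompute (p-1)(q-1), derive d,
	// and decrypt. The only thing standing between an attacker and d is factoring.
	return 0;
}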

If you remove all assumptions from the hypothetical situations used to define the security of encryption algorithms (i.e. put them out in the real world), you quickly find that it is simply impossible to provide any assurance of security whatsoever. For now, we depend on encryption for virtually everything that happens online, but just because things are going okay now doesn't mean that someone hasn't or won't figure out some ingenious way of cracking the strongest of encryption schemes like the shell of a hollow egg.

The Implementation

Recent events have brought to our attention that some implementation recommendations from government agencies like the NSA are riddled with back doors and dirty hacks. The result is that, even if we did have a perfectly secure encryption algorithm, we could still be undone by a flaw in its implementation.

An example of this is Heartbleed, a bug discovered in OpenSSL that left millions of machines open to attack by letting an attacker read server memory, potentially including the secret keys needed to decrypt messages sent between machines. This was a serious problem, and the reputation of OpenSSL was severely damaged for a while. It is hard to imagine that similar problems don't exist in virtually every piece of encryption software out there.

The Human

The weakest part of every security system is the human element. There are psychological and cultural holes we all have that can easily be taken advantage of to gain access to privileged places or information. Most of the time, we never know what hit us until it is far too late.

Some common things to look out for include passwords that are too simple (like “password”, your birthday, a favorite line from a movie, or anything else that contains whole words from your native language), lack of password protection on mobile or other computing devices, and the desire to be helpful to others when we sense there is a need.

Conclusion

Even if you do choose good passwords, keep everything password-locked, and watch your back, chances are that eventually, someone, somewhere, is going to get the best of you. The only real advice I can give beyond the obvious is this:

Don’t be stupid.


C++ 11: Welcome to the New World

I like C++. Always have. It’s the first programming language I learned. I purchased a C++ programming text from a second-hand store, read it in a month, and figured out that my life was about to change. And it did. I’m now a computer scientist/software engineer/web developer/number theory enthusiast. But there came a point at which it felt like the only secrets remaining in C++ were those weird quirks that make it so hard to do useful things.

Enter C++ 11, which has, once again, changed my life. For my senior project at my university, I am building a live implementation of Eulerian video magnification (created by the good folks at MIT) and will release it on GitHub when it's finished. I'm writing it in C++ for efficiency, but I kept running into problems. For example, with POSIX threads, you can't hand a non-static member function of a class directly to pthread_create as the thread routine. Boost is nice, but I wanted to try C++ 11 threads first. Come to find out, they are super nice!

I then wanted to both call a method and, upon its completion, modify a member variable of another class. This was difficult to do in classic C++ because of the interplay of class scopes. However, C++ 11 introduced lambda functions to C++. Suddenly, I don't have to clutter my code with a tiny global function that hurts readability. That function would only ever be called in one place anyway (when the threads are created), so it makes sense to create an anonymous function right then and there, much as we would in Java.
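To illustrate the pattern (the class and member names here are invented for the example), something like this now works with nothing but the standard library; compile with -std=c++11 -pthread:

#include <thread>
#include <iostream>

struct Result { bool ready = false; };

class Worker {
public:
	void process() { /* the expensive member function we want on its own thread */ }

	std::thread start(Result& out) {
		// The lambda captures "this" and "out", so it can call the member
		// function and write the result back -- no little global helper needed.
		return std::thread([this, &out]() {
			process();
			out.ready = true;
		});
	}
};

int main() {
	Result r;
	Worker w;
	std::thread t = w.start(r);
	t.join();                                          // wait for the worker to finish
	std::cout << std::boolalpha << r.ready << "\n";    // prints "true"
	return 0;
}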

Needless to say, I’m super excited about C++ 11 and will be digging deeper into it from here on out.


Proving the security of an encryption algorithm

Technically speaking, it's impossible to prove the security of any encryption algorithm. As long as it's unknown whether P equals NP, we have no idea whether the problems that encryption algorithms depend on for their security can be solved in ways we just haven't found yet. This can be disconcerting, because it means the only way to "prove" a new encryption algorithm is secure is to gather a lot of evidence from people who have tried to break it, supporting the idea that it is, in fact, hard to break.

But what if an encryption algorithm depended on something other than the difficulty of factoring for its security? Let's assume, for example, that there is a massively large key space that is accessible through a key generator, and that every attack vector on algorithm X that uses keys from this key space degrades into a key space search. In this case, the most efficient way of breaking algorithm X would be the attack vector that narrows the key space search to the smallest possible subset of keys. If the original key space is large enough, searching through this subset will, in theory, still be an incredibly large problem. Even if an attack reduces a 64-digit key space (10^64 keys) down to a 32-digit one (10^32 keys), and even if you can check a billion keys per second, it will take longer than the age of the universe to crack algorithm X.
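As a quick sanity check on that claim (with round figures for the brute-force rate and the length of a year):

#include <iostream>

int main() {
	const double keys           = 1e32;    // keys left to try after the best known attack
	const double keysPerSecond  = 1e9;     // a generous brute-force rate
	const double secondsPerYear = 3.15e7;

	double years = keys / keysPerSecond / secondsPerYear;
	std::cout << years << " years\n";      // ~3e15 years; the universe is ~1.4e10 years old
	return 0;
}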

The problem of proving the security of algorithm X then becomes one of showing that all attack vectors degrade into a key space search. If such a feat is possible, it would mean that no attack vector can take advantage of anything else about the algorithm to do better than brute force. In other words, if all attack vectors degrade into a key space search, then no attack vector exists that depends on any other problem. If that is the case, and if the key space is sufficiently large, a proof of the security of the algorithm may be possible.

Breaking such an algorithm for a single key would likely require far more time than the age of the universe. From an application-agnostic point of view, this is essentially the holy grail of encryption algorithms. However, other aspects of encryption need to be considered. For example, is it a public key or symmetric key algorithm? Public key algorithms usually depend on number-theoretic problems such as factoring large numbers, as is the case with RSA encryption. Symmetric key encryption, on the other hand, may well be able to support an algorithm like algorithm X.

I am currently working on a paper that specifies and explores the properties of a potential algorithm X. From our calculations so far, it would indeed take longer than the age of the universe to brute-force decrypt a message created by this algorithm, assuming, of course, that the key is sufficiently large (at least 16 bytes in this case).

The question we have at this point is whether it is possible to show that every possible attack vector degrades into a key space search. So far, it seems that this is the case, especially when you consider that an attacker is incredibly unlikely to have any of the unencrypted message. Assuming, however, that the attacker does have some of the original message, all the attack vectors we have identified that take advantage of this still degrade into key space searches, though the size of the key space is virtually impossible to calculate in such a case. In other words, even under extremely unfavorable assumptions, all the attack vectors we've identified degrade into a key space search at some point.

Our algorithm X may well be one of the most secure encryption algorithms at the moment. What, then, do we do with it?


Eulerian Video Magnification

For the uninitiated, Eulerian Video Magnification (EVM), the combination of algorithms that reveals the tiny changes in relatively static video feeds, is nothing short of magical. A lot of rumors are running around about what it can and cannot do. I intend to explain enough about it to put a few of these to rest.

EVM is a collection of algorithms that “magnifies” the subtle changes in a video feed. Depending on which filters it uses, it can be set to focus on color or movement. It is a project out of MIT (patent pending) that, in my opinion, is incredibly promising. Potential applications include baby monitors, portable lie detectors, and medical applications, though these will probably only become a reality after it has been thoroughly tested. But the question remains, how does it work and what does it require?

The how part is a bit complicated. It involves a list of intimidating image processing algorithms. I will list these one by one and describe their function. I will then explain what you need in order to get it to work.

EVM begins by splitting a video into two buffers: the original video and a copy that will be processed by the algorithms. The original video is stored off to the side while the processing copy runs through the algorithms. The first algorithm is one of two image processing algorithms that make use of image pyramids. An image pyramid is built by repeatedly removing every other row and column of an image while preserving as much detail as possible, so each level is 1/4 the area of the one before it. Gaussian pyramids are used to highlight color change, while Laplacian pyramids are used to highlight movement.

After the processed video feed is shrunk down by the pyramid algorithm, it is fed into a bandpass filter. A bandpass filter consists of three steps in this case: we begin by running an FFT (Fast Fourier Transform) on each channel of each pixel in the video buffer. This results in a buffer that represents the frequencies measured for the changes that occur in each of the pixels over time. We use the frame rate to figure out which frequencies we want to keep (based on high and low values) and we zero out the frequencies that occur outside of that range; in other words, the high and low values define a “band” of frequencies we want to pass through the filter, and all other frequencies are set to zero. After that, the last step is to undo the FFT by applying the inverse FFT. The result is a video in which only the changes that occur inside the target frequency are allowed through (in an ideal universe, but the reality is much more complicated than that and far beyond the scope of this post).

At this point, an amplification factor is applied to the processed video buffer. This factor is multiplied against the contents of the video buffer, resulting in the amplification effect in the final video. Once this is finished, we need to increase the size of each image in the buffer back to the original size. We do this by reversing the pyramid algorithm we used at the beginning. Once the buffer is returned to its original size, the processed video is combined with the original video to produce the output video that you see in the YouTube videos.
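To make the pipeline a bit more concrete, here is a rough, heavily simplified sketch of the color (Gaussian) variant in C++ with OpenCV. The OpenCV calls (pyrDown, pyrUp, dft, idft) are real, but the numbers (pyramid depth, buffer length, frame rate, frequency band, amplification factor) are purely illustrative, and a real implementation has to deal with many details this sketch ignores:

#include <opencv2/opencv.hpp>
#include <vector>

// Ideal temporal bandpass: each row of "data" is one pixel-channel's time series.
static void bandpassRows(cv::Mat& data, double fps, double fLo, double fHi) {
	cv::Mat freq;
	cv::dft(data, freq, cv::DFT_ROWS | cv::DFT_COMPLEX_OUTPUT);  // FFT over time, row by row
	const int n = data.cols;
	for (int k = 0; k < n; ++k) {
		double f = (k <= n / 2 ? k : n - k) * fps / n;            // frequency of bin k, in Hz
		if (f < fLo || f > fHi)
			freq.col(k).setTo(cv::Scalar::all(0));                // zero everything outside the band
	}
	cv::idft(freq, data, cv::DFT_ROWS | cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);
}

int main() {
	cv::VideoCapture cap(0);               // any webcam; device 0 is an assumption
	const int    levels = 4;               // pyramid depth (illustrative)
	const int    N      = 128;             // frames per buffer (~4 seconds at 30 fps)
	const double fps    = 30.0;            // assumed capture rate
	const double fLo = 0.8, fHi = 1.0;     // pass band in Hz (illustrative)
	const double alpha  = 50.0;            // amplification factor

	std::vector<cv::Mat> originals, shrunk;
	cv::Mat frame;
	while ((int)originals.size() < N && cap.read(frame)) {
		cv::Mat f32;
		frame.convertTo(f32, CV_32FC3, 1.0 / 255.0);
		originals.push_back(f32.clone());
		cv::Mat down = f32;
		for (int i = 0; i < levels; ++i) cv::pyrDown(down, down); // shrink: Gaussian pyramid steps
		shrunk.push_back(down);
	}
	if ((int)shrunk.size() < N) return 1;

	// Flatten the shrunk frames: one row per pixel-channel, one column per frame.
	const int P = shrunk[0].rows * shrunk[0].cols * 3;
	cv::Mat series(P, N, CV_32F);
	for (int t = 0; t < N; ++t)
		shrunk[t].reshape(1, P).copyTo(series.col(t));

	bandpassRows(series, fps, fLo, fHi);   // keep only changes inside the target band
	series *= alpha;                       // amplify what is left

	// Un-flatten, grow each frame back to full size, and add it onto the original.
	for (int t = 0; t < N; ++t) {
		cv::Mat up = series.col(t).clone().reshape(3, shrunk[0].rows);
		for (int i = 0; i < levels; ++i) cv::pyrUp(up, up);
		cv::resize(up, up, originals[t].size());                  // guard against rounding mismatches
		cv::Mat out = originals[t] + up;
		cv::imshow("EVM sketch", out);
		cv::waitKey(33);
	}
	return 0;
}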

Because the algorithm runs on a buffer, it is difficult to write a program that runs EVM on a live video feed. All publicly available open source code that I know of requires the user to first record a video to a video file, then run that video file through EVM to produce a new video file with the desired effects. The problem with this is that you can’t check your settings live; you have to wait until the video is “compiled” in order to view it, and the process can be somewhat tedious.

Now for what is required to do this: EVM can be run using just about any web camera, but the higher the quality of the camera, the better the results. Some web cameras introduce a lot of noise into the video feed, and these are not ideal. For serious exploration of EVM, I suggest purchasing a better web camera, though modern laptops (those purchased within the last year or so) and modern smartphones usually have good enough cameras built in. No, you don't need a Kinect to do EVM. As for the rumors about the Xbox One's capacity to measure heart rate, my suspicion is that Microsoft has either developed a method that doesn't require EVM and relies on infrared light, or come to an agreement with the folks at MIT that lets them use it without the threat of future lawsuits.

The current legal status of EVM is that MIT has a patent pending on it. They do, however, provide source code for free so that computer-savvy individuals can experiment with it. They do this under the agreement that the code not be used for any commercial purpose and that, if asked, an entity must stop making its EVM implementation available. It can, however, legally be used for research and development purposes, and if you want to use it in a commercial application, you can contact MIT to work out an agreement.

As a computer science student, I am required to produce a senior project. The subject of my senior project is EVM. I plan to implement it in C++ using the OpenCV library. It will use a circular frame buffer that can contain enough frames to cover a few seconds of video. While there will be some delay between recording and displaying video, it will be a far better workflow than is currently available through publicly released code. Once finished, a user will be able to switch between filters, adjust amplification level, and adjust the bandpass high and low levels as they watch the video feed. It is hoped that this will facilitate the live study of EVM, which could potentially reveal many more applications.

Imagine, a “poor man’s x-ray” created using EVM on a video feed from different light frequencies! Imagine portable medical devices (Star Trek’s Tricorder for example…) that could one day be used to diagnose problems! Imagine being able to use your cell phone to gauge how nervous people get when you are around! Honestly, the potential applications of this are enormous. It does have a long way to go between now and then. My hope is that my project, once I make it available, will aid in the process of improving the art.

Questions? Comments? By all means!


Web Content Filtering in Ubuntu 14.04

This tutorial covers the setup for web content filtering in Ubuntu 14.04 using Dansguardian, Squid, and iptables.

The first step is to install the needed software:

$ sudo apt-get install squid dansguardian iptables clamav-freshclam

We'll configure Squid first. The file of interest is /etc/squid3/squid.conf. Using your favorite text editor, make sure the following lines are set:

...
http_port 3128
...
always_direct allow all

NOTE: do not use the “transparent” setup (placing the word transparent after the port number in the squid config file). This causes all sorts of strange problems. For me, https worked fine but http was blocked completely.

Next, configure Dansguardian (/etc/dansguardian/dansguardian.conf). First, comment out the line near the beginning of the file that contains "UNCONFIGURED". Then add (or modify existing lines to match) the following:

filterip = 127.0.0.1
daemonuser = 'proxy'
daemongroup = 'proxy'
accessdeniedaddress = 'http://localhost/cgi-bin/dansguardian.pl'

It’s important to note that if you do any web development, you will want to avoid running Dansguardian on a standard port (like the default of 8080). I prefer 8888:

filterport = 8888

Save the file and close your editor. Now we’re ready for iptables. Enter the following commands in the terminal:

$ sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -m owner --uid-owner proxy -j ACCEPT
$ sudo iptables -t nat -A OUTPUT -p tcp --dport 3128 -m owner --uid-owner proxy -j ACCEPT
$ sudo iptables -t nat -A OUTPUT -p tcp --dport 80 -j REDIRECT --to-ports 8080
$ sudo iptables -t nat -A OUTPUT -p tcp --dport 3128 -j REDIRECT --to-ports 8080

Be sure to replace the port numbers in the last two commands with the filterport number you set in the Dansguardian config.

Now, if you restart Dansguardian and squid, it should work ok, but after reboot, it won’t keep working because the iptables settings won’t persist. To keep them around, install one last package:

$ sudo apt-get install iptables-persistent

This package will ask you whether to save the current settings. Indicate that you want it to save for both IPv4 and IPv6. Then restart Squid 3 and Dansguardian:

$ sudo service squid3 restart
$ sudo service dansguardian restart

Voilà! Your web content filtering system should be up and running. For more protection, you can download a blacklist from somewhere like here. Extract the lists and use the terminal to copy them into the right place and set permissions:

$ sudo mv blacklists /etc/dansguardian/blacklists
$ sudo chown -R root:root /etc/dansguardian/blacklists

Restart Dansguardian once more to have the lists take effect. You should be able to load regular websites fine, and if you try to access anything particularly questionable, Dansguardian will replace the page with a blocked site notification.
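If you ever want to double-check that the redirect rules are still in place (after a reboot, for example), you can list the NAT table and look for the REDIRECT entries:

$ sudo iptables -t nat -L OUTPUT -n --line-numbers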

Enjoy!


Toshiba Satellite Touchscreen Laptop

This is a note for anyone who is unfortunate enough to run into the same problem I did with the Toshiba Satellite P55t-A5202 touchscreen laptop.

Symptoms: After a month or two of flawless operation, the laptop suddenly fails to boot. The power button elicits absolutely no response. The battery light is on, but it glows amber and doesn’t flash. Nothing in the user guide covers this situation, because everything it talks about involves a flashing battery light. A call to technical support also yields nothing except perhaps a suggestion to leave the AC adapter plugged in for 24 hours while the battery charges. You note that, even after the hour or two it normally takes to charge the battery, the charging light is still on and the power button is still unresponsive.

Solution: If you close the laptop and turn it over, you will notice that on the bottom of the laptop, very near where the indicator lights are, there is a small pin hole. It looks just like the pin holes for other devices that require reset buttons. Stick a pin in it, push the button the hole leads to, and the orange light turns off. Suddenly your computer boots up like it always did and pretends like nothing happened.

The cause of this problem is probably an attempt on Toshiba’s part to be clever with the power system of their laptops; instead of letting a laptop die with a power surge, the laptop is equipped with something of a breaker. It can’t handle huge power surges like a lightning bolt, but it definitely gets flipped when a small one comes through the wire. Pushing the button resets that breaker and your computer comes to life again.

I am still a little angry about this. Toshiba should have had some sort of documentation on this feature. It would be very easy for a college student like me, who frequently moves a laptop around for classes and coursework, to run into this problem several times. I'm not entirely sure I appreciate the feature, unless, of course, it saved my computer's hardware from getting fried like my last computer.

Hope this helps someone.


Object-Oriented JavaScript, Part 1

JavaScript takes a different approach to OOP than other languages. It replaces the class-based system of languages like C++ and Java with a prototype-based approach. See here for an excellent discussion of the differences between these two approaches. For now, the most important thing to note is that in JavaScript, there are no classes, so if you come from a C++, C#, or Java background, you will need to disregard much of what you already know in order to effectively work with objects in JavaScript.

To create an object in JavaScript, all you need to do is create a function that returns "this", call it with "new", and store the returned value in a variable:

function Mammal() {
	/* add attributes and functions to "this" */
	return this;
}

var dog = new Mammal();

Because everything in JavaScript, even a function, is an object, we can store variables and functions on the above function's "this":

function Mammal() {
	this.sound = "Grrrr."
	this.makeSound = function() {
		console.log(this.sound);
	}
}

Now, when we create a Mammal object and call its “makeSound()” method, the object’s “sound” member variable will be printed to the console.

This approach is nice, but it has one major problem: every time we create a Mammal object using this function, that object will have its own allocated memory to store the function instructions for the “makeSound()” method. In some cases when we create huge arrays of our custom objects, this is incredibly inefficient. It would be much better if we had a way of defining objects that allowed us to keep only one copy of the methods attached to those objects, no matter how many individual instances we have.

Enter the prototype. In JavaScript, each object has an associated prototype. You can look at the prototype as the part of the object that other kinds of objects can copy and modify in order to take advantage of inheritance. Not only does this have huge benefits with reference to memory savings, it also gives us the ability to define object hierarchies that can be modified during execution (not often a good idea, but it CAN come in handy every once in a while).

So, how does one access and edit the prototype of an object? There are two general ways of doing this. The first is to assign an object literal to the Mammal function's prototype property, as follows:

function Mammal() {
	//construct stuff that you don't want other objects to inherit
}

Mammal.prototype = {
	sound: "Grrrr.",
	makeSound: function() {
		console.log(this.sound);
	}
}

This is great, but sometimes you don’t want to replace the prototype that is already there. In other words, sometimes you want one object (a cat, for example) to inherit the behavior and default data of the parent object (Mammal). In these cases, we would declare individual functions and member variables as attributes of the object’s prototype:

Mammal.prototype.eat = function() {
	console.log("Nom nom nom!");
}

As stated before, we can store both data and functions in the prototype of an object. Everything we store in the prototype can be inherited by other objects by setting those objects' prototypes to an instance of the parent type. Using this approach, we can create a Dog object as follows:

function Dog() {
	return this;
}
Dog.prototype = new Mammal();
//And later...
var fido = new Dog();
fido.makeSound();

Now the Dog object type has the capacity to do anything that the Mammal object can do, plus whatever else we add to its prototype. Once we define methods for things that all dogs can do (like the "goToHeaven()" method), we can get more specific with individual kinds of dogs, like the "rollOver()" method for trained dogs or the "superDrool()" method for those dogs that just can't stop… Anyway, after defining all of this individual stuff on sub-objects, we can still access the methods of the Mammal object, and they still behave the same way unless one of the sub-objects has overridden one of Mammal's functions.
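For example, sticking with the made-up method names above, extending and overriding Dog looks like this:

Dog.prototype.rollOver = function() {
	console.log("*rolls over*");
};

// Override the inherited makeSound() for dogs only:
Dog.prototype.makeSound = function() {
	console.log("Woof!");
};

var rex = new Dog();
rex.rollOver();  // "*rolls over*" -- defined on Dog.prototype
rex.makeSound(); // "Woof!"        -- Dog's version shadows Mammal's
rex.eat();       // "Nom nom nom!" -- still inherited from Mammal.prototype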

The best way to nail this down is to get an interactive console (I suggest installing Node) and play around with objects to figure out what does and does not work. Once you get the basic idea down, you’ll find that the way that JavaScript handles objects is incredibly intuitive and makes perfect sense for scripting languages.


Ubuntu 13.10 Intel Graphics Killed by OpenCV

If you, like me, installed OpenCV from the Ubuntu package manager on a computer with an Intel graphics card, you might have thought you had bricked your installation and would have to start from scratch. Thanks to some good work by a few other Ubuntu users, I was able to remedy this problem and get my desktop back. Moral of the story: don't install OpenCV from the repos. Somebody in charge of the package dependencies made it depend, in a roundabout way, on NVIDIA packages, which breaks the desktop for Intel graphics chips.

My symptoms were these: when I booted my Dell Inspiron laptop with its Intel graphics card, the boot process would make it all the way to the login screen. I would enter my credentials, log in, and be met by a blank black screen with my mouse cursor. I could move the pointer around, but without a window manager or file manager or anything, I couldn't do anything graphically.

The Fix

This fix had two parts: first, I had to remove the offending packages. This was done by running the following command:

$ sudo apt-get install ocl-icd-libopencl1

…and then running:

$ sudo apt-get autoremove

…which removes all the redundant packages that libopencv-dev brought in, leaving you with a generic version of libopencl1. This takes care of the NVIDIA package problems, but when you reboot, you still don't get Unity back. What's up with that?

Unity runs as a Compiz plugin in Ubuntu. In some cases, installing the wrong packages like those above will cause the settings in Compiz that start Unity to be unset. You will need to install the Compiz configuration tool:

$ sudo apt-get install compizconfig-settings-manager compiz-plugins-extra

Then, after reboot and login (to the graphical desktop that lacks Unity), press Ctrl+Alt+T to open a terminal and run:

$ ccsm

…to start the manager. Look for the Unity plugin when the manager comes up, click on it, and make sure it gets enabled.

…And that's it! Your machine should be back up and running correctly without re-installing the desktop, Unity, or the OS. In the meantime, let's hope that whoever made the mistake of making libopencv-dev depend on all that stuff gets their act together and fixes it.


Ubuntu, Apache2, and Ruby on Rails with Passenger

This tutorial aims to explain how to get a Ruby on Rails site “deployed” on a local machine strictly for development purposes.

1: Install Apache

You can do this in several ways, but here is how I like to do it, because I need Apache to speak PHP too…

$ sudo apt-get update && sudo apt-get install lamp-server^ -y

(NOTE: the caret after "lamp-server" is not a typo!)

If you don't want the entire LAMP server stack, you can simply install Apache:

$ sudo apt-get install apache2 apache2-mpm-prefork apache2-prefork-dev -y

Navigate your browser to http://localhost. If you see the default Apache2 "It works!" page (screenshot omitted here), your server's working.

2: Install Ruby and Rails

This one’s a bit more difficult; you can go the Ubuntu way and install Ruby from the Ubuntu package manager, but you’ll end up with an ancient version (by the standards of the Rails community). The best tutorial by far that I have found on this is at the link below:

Setup Ruby on Rails on Ubuntu 13.10

After following the steps in the GoRails link, the rest of this should go smoothly.

3: Install Passenger

Phusion Passenger is an excellent package that gives Apache2 the ability to run Rails applications. It is actually super simple to install:

gem install passenger && passenger-install-apache2-module

The second command (after the ampersands) takes care of checking for dependencies. It will give you instructions to follow in order to get passenger running; these are highlighted in red in the console output. Follow those instructions exactly, with one exception: if you see an error complaining about Apache2 not being compiled with a usable “MPM”, don’t worry; just hit “Enter” again to continue with the installation anyway.

Once the installation of Passenger is complete, you will see two important chunks of output in the console: the first is a few lines of code that you need to put into the Apache2 config, and the other is an example virtualhost setup.

Copy the lines that are meant to go into the Apache2 config using Ctrl+Shift+C. Then run:

sudo gedit /etc/apache2/apache2.conf

You can replace "gedit" with your editor of choice. Scroll to the bottom of the file and paste in the lines you copied. You may also consider putting "ServerName localhost" into this file if you are getting server-name errors when you start Apache2.

You may also wish to copy the example virtualhost setup to a temporary file for use in a moment.

4: Virtualhost Setup

By default, Apache2 only has permissions to operate inside of “/var/www/”. This can be annoying, because we don’t want to have to use “sudo” every time we want to edit something in our projects. I tackle this problem in two ways:

1: $ sudo usermod -a -G www-data $USER
2: $ sudo ln -sT /home/$USER/path/to/my_project /var/www/my_project

The first command adds your user to the www-data group, meaning that group permissions applying to www-data will also apply to you (you will need to log out and back in for the new group membership to take effect). The second creates a symlink in /var/www called "my_project". This symlink allows Apache2 to "see" what is in your project, which is the last thing we need to do before adding the virtualhost file for your project.

Create your new virtualhost file:

sudo gedit /etc/apache2/sites-available/my_project.conf

As always, replace “gedit” with your editor of choice.

Copy the following virtualhost configuration into your editor and change the document paths to lead to your project’s public directory through the symlink you just made:

<VirtualHost *:80>
    ServerName local.trackerx.com
    RailsEnv development
    DocumentRoot /var/www/my_project/public
    <Directory /var/www/my_project/public>
        AllowOverride all
        # MultiViews must be turned off.
        Options -MultiViews
    </Directory>
</VirtualHost>

You will probably also want to change the ServerName to a hostname of your own and point that hostname at your machine in /etc/hosts (a line like "127.0.0.1 local.trackerx.com"), and you may choose to change the RailsEnv to "production" or whatever else you have set up in your project.
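One step that is easy to forget: the new virtualhost has to be enabled before Apache2 will serve it. On Ubuntu that is done with a2ensite (drop the ".conf" on older Apache versions), followed by a reload:

sudo a2ensite my_project.conf
sudo service apache2 reload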

Wrap It Up

This setup should allow you to view your projects without having to run the development server every time you want to get to work, and it allows you to put your projects in your home directory or wherever else you want to put them.

Note: you will probably have to restart Apache2 on a regular basis. When you install a new gem in your project, run:

sudo service apache2 restart

to get the server running again.

Happy coding!
