Twice Foiled by CRLF on Linux

I’m just going to get this off my chest. There is no reason in the world why life has to be this difficult. These computers were brought into our lives to ease the pain and yet, somehow, they just don’t measure up! I have been bitten by this one twice. Clearly, had I blogged about it before now, I could have reclaimed a day of my life.

So Git has this thing with carriage return/line feed characters, and it should make our lives easier when moving our code between a Linux environment and a Windows environment. I had given it a cursory look a while back and configured my own machine to prevent, once and for all, the mayhem that arises from saving files with CRLF (Carriage Return Line Feed) characters onto a Linux server. Well, I forgot about it, and it took me nearly a day to figure out why I couldn’t run a simple bash script that calls a simple PHP page that processes a simple set of commands and is ultimately configured as a simple cron job to run every single day.
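For what it’s worth, the machine-level setting in question is presumably Git’s core.autocrlf, the same option the .gitattributes comment further down refers to. A minimal sketch of setting it on a Linux box, assuming you want CRLF converted to LF on commit and nothing converted on checkout:

# Convert CRLF to LF when committing; never convert on checkout
git config --global core.autocrlf input

# Double-check the current value
git config --global core.autocrlf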

SYNOPSIS: On a Linux server, you cannot run a bash script whose lines end in CRLF. It fails miserably. In my case, my script file called:

php /do_important_things.php

The error returned (because the invisible carriage return gets tacked onto the end of the file name) was:

Could not open input file: /do_important_things.php

When you’re staring at an error that says “Could not open input file” and you have no idea why it doesn’t work, I can guarantee you that CRLF is not the first thing that pops into your head. In both instances, this Stack Overflow question and answer by chown has proven invaluable to me!
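For anyone walking into the same wall, here’s a minimal sketch of how you might confirm and then strip the offending line endings (the script name is hypothetical, and dos2unix may need to be installed separately):

# 'file' will report "with CRLF line terminators" if the endings are wrong
file nightly_cron.sh

# Strip the carriage returns in place; either tool does the job
dos2unix nightly_cron.sh
sed -i 's/\r$//' nightly_cron.sh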

I have since adapted the Git repository to include a .gitattributes file (as recommended) that specifies how line endings are to be handled. So this shouldn’t ever happen again…on this repository.

# Set default behavior, in case users don't have core.autocrlf set
* text=auto

# Explicitly declare text files we want to always be normalized and converted
# to native line endings on checkout.
*.c text
*.h text

# Declare files that will always have CRLF line endings on checkout.
*.sln text eol=crlf

# Denote all files that are truly binary and should not be modified.
*.png binary
*.jpg binary
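One caveat worth noting for my future self: the .gitattributes file only takes effect as Git touches files going forward; anything already checked out keeps its old endings. A rough sketch of renormalizing the existing files, assuming a reasonably recent Git (2.16+, for the --renormalize flag):

# Re-apply the .gitattributes rules to everything already tracked
git add --renormalize .
git commit -m "Normalize line endings"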

Could you repeat that?

Tell me if you’ve heard this one before: so we have a Master and Slave MySQL database configuration in a Magento store, and we suddenly receive a call from the call center agents stating that no one can add items to their shopping carts. What gives?

Minutes later we hear that someone with administrative privileges, let’s call them an Administrator, pushed a change through URapidFlow to update their whole Product Catalog (of, say, several thousand products). What could possibly go wrong, right? {grins}

Further details: In our Magento, like most everyone’s Magento I assume, we ask it to always read from the Slave and always write to the Master. Not only are the shopping carts not persisting data*, but the administrative section doesn’t appear to be persisting any data either. No errors are given; it simply takes your data as always, as if to say “Thank you! We’ve got it from here,” but then immediately responds with, “Uh, could you repeat that?” Strange. Nothing in the logs indicates a problem. Why don’t the changes take effect?

Okay, I’ve said too much already. If you have figured it out, don’t bother reading the rest; simply post your answer in the comments below. I’ll kindly wait…

Ahem… Done yet? Great! Now, after a lot of toil and tussle, we noticed the shopping cart finally began to persist its data. This was really baffling, as we didn’t do anything in particular (well, we did a lot of things, but nothing that elicited any immediate feedback as to the resolution) and out of nowhere things just started jiving again. Now, I’m not one who typically says, “It’s up! I’m done!”, and I wasn’t about to leave thinking this Magento has a mind of its own, but it was late and I was tired. So…

Next day: We noticed that some Admin test data we threw in the night before (when Magento was misbehaving) had actually persisted as well. In fact, all the changes we made during our “down time” persisted. This got us thinking: “You know, it may have been writing to the database the whole time, but the other database just wasn’t getting the changes in proper time.” I’ll take a moment to stop right there and smugly say to my colleague, who came to this deduction in my very presence (and with my assistance, I might add): “YOU NAILED IT!” And he (we; I’d like to think “we”) did!

In talking with a truly stellar DBA from the planet that teaches MySQL, he kindly mentioned that the night before he had noticed “data drifting,” or lag time, in the replication from the Master to the Slave on the servers, but he didn’t think it was significant at the time, seeing as how we were chasing down a “persistence issue with the database not saving writes.” And to his credit, he didn’t know anything about our architecture or how the world of Magento was configured (of course, neither do we :)). But after speaking with him and getting a fantastic lesson on MySQL replication, we surmised that all this toil and tussle boiled down to some Administrator kindly giving us a lead that they may be the sole individual able to bring old Maggie to her knees!

So, in summary: large amounts of concurrent writes on a Master MySQL database may cause locking and adversely affect the replication of data to the Slave database, so reads from the Slave return stale data until replication catches up.
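And for next time, a quick sketch of how one might spot this sooner, assuming shell access to the Slave and classic MySQL replication (Seconds_Behind_Master is the number to watch):

# Run on the Slave: a large or growing Seconds_Behind_Master means
# the Slave is falling behind the Master
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Seconds_Behind_Master|Slave_IO_Running|Slave_SQL_Running'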