Installing Magento with MySQL Binary Logging On


If you’re installing Magento and run into an issue where the database cannot create triggers because binary logging is enabled, the fix may be as simple as setting the MySQL global dynamic variable log_bin_trust_function_creators to ON.

Here is an example of the error received while binary logging is on:

ERROR 1419 (HY000): You do not have the SUPER privilege and binary logging is
enabled (you *might* want to use the less safe log_bin_trust_function_creators
variable)

At a MySQL prompt, you can confirm binary logging is enabled with the following command:

mysql> SHOW VARIABLES LIKE 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
1 row in set (0.00 sec)

By setting the log_bin_trust_function_creators variable to ON, you allow users with the CREATE ROUTINE or ALTER ROUTINE privilege to execute those commands in MySQL while binary logging is enabled.

SET GLOBAL log_bin_trust_function_creators=1;

While this is admittedly less safe, it gets you past the hurdle during the Magento installation. Once the installation is finished, you can always reset the variable with SET GLOBAL log_bin_trust_function_creators=0;.

An alternative to this approach is to grant the account being used to install Magento the SUPER privilege within MySQL.
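If you go that route, the grant looks like the following sketch (the magento account name and localhost host are assumptions; substitute the account your installer actually uses):

```sql
-- Grant SUPER so the installer account can create triggers
-- while binary logging is enabled (account name is hypothetical)
GRANT SUPER ON *.* TO 'magento'@'localhost';
FLUSH PRIVILEGES;
```

Remember to revoke the privilege once the installation is done, for the same safety reasons as above.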

Magento Install Error – missing local.xml.template file

If you ever run into the following error while installing Magento:

PHP message: PHP Fatal error: Call to a member function insert() on a non-object in .../app/code/core/Mage/Core/Model/Resource/Resource.php on line 133

Check to ensure you have the local.xml.template file in your app/etc/ folder.

This got me once while moving the code base from my development environment to my test environment and then trying to install Magento there. My .gitignore file was configured to ignore this file, so it never made it to the test server when I pulled from my repository.
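You can confirm which .gitignore rule is swallowing the file with git check-ignore. A throwaway repo illustrates it (the paths mirror the Magento layout; the ignore pattern is an assumption for the demo):

```shell
# Build a scratch repo that reproduces the scenario
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
mkdir -p app/etc
# Hypothetical rule that would hide the template file
echo 'app/etc/*.template' > .gitignore
touch app/etc/local.xml.template
# Prints the matching .gitignore line, showing why the file never transfers
git check-ignore -v app/etc/local.xml.template
```

Run the same check-ignore command in your real repository to find the offending rule.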

PHP-FPM: Health Monitoring Status Page

Recently on our hosted site, some of our end users were sporadically receiving 502 Bad Gateway errors. In researching the problem, we recognized that one of several web servers (load-balanced across multiple machines) was constantly throwing 502 Bad Gateway errors back to the client. We discovered the NGINX process was running successfully; however, the PHP-FPM process was hung, and any PHP requests to that server were returning 502s.

To prevent this from occurring again, we’re updating our health monitoring (the load balancer monitors each server for availability) to test a PHP page instead of static content. This ensures both NGINX and PHP-FPM are up and responding, preventing clients from seeing 502 Bad Gateway errors thrown from NGINX.

PHP-FPM comes with its own health monitoring page that can be enabled through the configuration settings. By uncommenting (or adding) the pm.status_path setting in the /etc/php-fpm.d/www.conf configuration file and restarting the PHP-FPM process, you should be able to request the page from a browser (e.g., http://yourserver/status).

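The relevant www.conf line is just the status path (the /status value is a common choice and matches the NGINX location used below; any path works):

```ini
; /etc/php-fpm.d/www.conf
pm.status_path = /status
```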

You can lock the page down from the public by only allowing requests from certain IP addresses, as shown below:

    location ~ ^/status$ {
        access_log off;
        # Replace x.x with your internal address
        allow 172.16.x.x;
        deny all;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # Adjust to your PHP-FPM listen address (TCP port or unix socket)
        fastcgi_pass 127.0.0.1:9000;
    }

In the NGINX configuration above, we allow only 172.16.x.x (in our case, an internal IP address) access to the /status page. The request is forwarded to the PHP-FPM backend process, which returns results similar to the following:

pool:                 www
process manager:      static
start time:           21/May/2015:11:05:10 -0400
start since:          10956
accepted conn:        455
listen queue:         0
max listen queue:     0
listen queue len:     0
idle processes:       49
active processes:     1
total processes:      50
max active processes: 3
max children reached: 0
slow requests:        0
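For the probe itself, anything that exercises PHP-FPM works. As a sketch, a shell check (the threshold and field names are assumptions based on the output above; in practice you’d fetch the page with curl) could flag a growing listen queue before users start seeing 502s:

```shell
# Sample status output as shown above; in practice, fetch it with
# curl -s http://yourserver/status
status='pool:                 www
listen queue:         0
active processes:     1
total processes:      50'

# Extract the "listen queue" value; nonzero means requests are waiting
queue=$(printf '%s\n' "$status" | awk -F':[[:space:]]*' '$1 == "listen queue" {print $2}')
if [ "$queue" -gt 0 ]; then
    echo "WARN: listen queue = $queue"
else
    echo "OK"
fi
```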

Twice Foiled by CRLF on Linux

I’m just going to get this off my chest. There is no reason in the world why life has to be this difficult. These computers were brought into our lives to ease the pain, and yet, somehow, they just don’t measure up! I have been bitten by this one twice. Clearly, had I blogged about it before now, I could have reclaimed a day of my life.

So Git has this thing with carriage return/line feed characters, and it should make our lives easier when moving code between a Linux environment and a Windows environment. I had given it a cursory look a while back and configured my own machine to, once and for all, prevent the mayhem that arises from saving files with CRLF (Carriage Return Line Feed) line endings onto a Linux server. Well, I forgot about it, and it took me nearly a day to figure out why I couldn’t run a simple bash script that calls a simple PHP page that processes a simple set of commands, all configured as a simple cron job to run every single day.

SYNOPSIS: On a Linux server, you cannot run a bash script if that script’s lines end in CRLF. It fails miserably. In my case, my script called:

php /do_important_things.php

The error returned (from having a CRLF at the end) was:

Could not open input file: /do_important_things.php

When you walk into an error that says “Could not open input file” and have no idea why it doesn’t work, I can guarantee you CRLF is not the first thing that pops into your head. In both instances, this Stack Overflow question and answer by chown has proven invaluable to me!
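To see (and fix) the problem directly, you can reproduce it with a throwaway script (the file name here is made up; dos2unix does the same job as the sed command if you have it installed):

```shell
# Simulate a script saved on Windows with CRLF line endings
printf 'echo hello\r\n' > run_things.sh
# 'file' exposes the problem: "... with CRLF line terminators".
# The trailing \r rides along with the last argument on each line,
# which is exactly why PHP saw a bogus filename.
file run_things.sh
# Strip the carriage returns in place
sed -i 's/\r$//' run_things.sh
file run_things.sh   # now plain ASCII text
sh run_things.sh     # runs cleanly
```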

I have since updated the Git repository to include a .gitattributes file (as recommended) that declares how line endings should be handled. So this shouldn’t ever happen again…on this repository.

# Set default behavior, in case users don't have core.autocrlf set
* text=auto

# Explicitly declare text files we want to always be normalized and converted
# to native line endings on checkout.
*.c text
*.h text

# Shell scripts must keep LF endings to run on Linux
*.sh text eol=lf

# Declare files that will always have CRLF line endings on checkout.
*.sln text eol=crlf

# Denote all files that are truly binary and should not be modified.
*.png binary
*.jpg binary

Could you repeat that?

Tell me if you’ve heard this one before:  So we have a Master and Slave MySQL Database configuration in a Magento store and we suddenly receive a call from the call center agents stating that no one can add items to their shopping carts.  What gives?

Minutes later we hear that someone with administrative privileges, let’s call them an Administrator, pushes a change through URapidFlow to update their whole Product Catalog (of say several thousand products).  What could possibly go wrong right? {grins}

Further details: In our Magento, like most everyone’s Magento I assume, we ask it to always read from the Slave and always write to the Master. Not only are the shopping carts not persisting data, but the administrative section doesn’t appear to be persisting any data either. No errors are given; it simply takes your data as always, as if saying, “Thank you! We’ve got it from here,” but then immediately responds with, “Uh, could you repeat that?” Strange. Nothing in the logs indicates a problem. Why don’t the changes take effect?

Okay, I’ve said too much already. If you’ve figured it out, don’t bother reading the rest; simply post your answer in the comments below. I’ll kindly wait…

Ahem… Done yet? Great! After a lot of toil and tussle, we noticed the shopping cart finally began to persist its data. This was really baffling, as we didn’t do anything in particular (well, we did a lot of things, but nothing that elicited any immediate feedback as to the resolution), and out of nowhere things just started jiving again. Now, I’m not one who typically says, “It’s up! I’m done!”, and I wasn’t about to leave thinking this Magento had a mind of its own, but it was late and I was tired. So…

Next day: We noticed some Admin test data we had thrown in the night before (when Magento was misbehaving) had actually persisted as well. In fact, all the changes we made during our “down time” persisted. This got us thinking: “You know, it may have been writing to the database the whole time, but the other database just wasn’t getting the changes in proper time.” I’ll take a moment to stop right there and smugly say to my colleague, who came to this deduction in my very presence (and with my assistance, I might add): “YOU NAILED IT!” And he (we; I’d like to think “we”) did!

In talking with a truly stellar DBA from the planet that teaches MySQL, he kindly mentioned that the night before he had noticed “data drifting,” or “lag time,” in replication from the Master to the Slave, but didn’t think it was significant at the time, seeing as how we were chasing down a “persistence issue with the database not saving writes.” And to his credit, he didn’t know anything about our architecture and how the world of Magento was configured (of course, neither do we). But after speaking with him and getting a fantastic lesson on replication with MySQL, we surmised that all this toil and tussle boiled down to some Administrator kindly giving us a lead that they may be the sole individual to bring old Maggie to her knees!

So, in summary: large amounts of concurrent writes on a Master MySQL database can cause locking to occur and adversely affect the replication of data to the Slave database.
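If you suspect the same thing, replication lag is easy to check from the slave; a quick sketch for a MySQL 5.x-era master/slave setup like ours:

```sql
-- Run on the slave; a growing Seconds_Behind_Master value means
-- the slave is falling behind the master's write load
SHOW SLAVE STATUS\G
```

Watching that value during a bulk catalog import would have pointed us at the answer hours earlier.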