One of the significant features in version 2.0 of Zmanda Recovery Manager (ZRM) for MySQL is MySQL backup using Solaris ZFS snapshots. Backing up MySQL using
filesystem snapshots has minimal impact on the MySQL databases: they are unavailable for updates for less than a second, and the impact on the application does not depend on the size of the database.
ZRM 2.0 can be downloaded from the Zmanda downloads page. It supports all Linux and Solaris distributions. The documentation is available on the ZRM wiki, and the ZRM forums are a good place to get questions about the project answered.
This article shows an example of how to install, configure, back up, and restore MySQL databases using Zmanda Recovery Manager (ZRM) for MySQL on OpenSolaris. The example takes advantage of ZFS snapshots to do full backups.
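To picture why the application impact is so small, here is a hedged, dry-run sketch of the kind of sequence a snapshot-based backup performs (this is not ZRM's actual code, and the pool/filesystem names are made up; in real use the read lock must be held in a single client session while the snapshot is taken):

```python
# Dry-run sketch of a ZFS-based MySQL backup sequence (illustrative only).
# The database is read-only just for the instant the snapshot is taken,
# regardless of data size, because ZFS snapshots are copy-on-write.
steps = [
    "FLUSH TABLES WITH READ LOCK",         # briefly quiesce MySQL
    "zfs snapshot tank/mysql@zrm-backup",  # near-instant, size-independent
    "UNLOCK TABLES",                       # updates resume immediately
]
for step in steps:
    print(step)
```

The snapshot can then be mounted or sent elsewhere at leisure, which is why the backup window seen by the application stays under a second.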
We have significantly improved the snapshot interface for doing MySQL backups with ZRM. This work has been released as part of ZRM 2.0, which includes two snapshot plugins that use the interface: Linux LVM and Solaris ZFS.
Changes in ZRM 2.0:
* Solaris packages
* ZRM clients for Linux (RPM/Debian) and Solaris
* Tested on Gentoo distribution
* Improved Snapshot plugin interface
* Solaris ZFS snapshot plugin
* Backup of remote servers using snapshots
* Asynchronous checksum computation for improved backup performance
* Backup compression on the fly for logical backups
Download it from the Zmanda downloads page and give it a try. I will write more about how to use the plugin interface next week.
Last week, I described how to use the MySQL plug-in API to write a minimal ‘Hello world!’ information schema plug-in. The main purpose of that plug-in is to illustrate the bare essentials of the MySQL information schema plug-in interface.
In this article, I’d like to take that to the next level and demonstrate how to write an information schema plug-in that can access some of the internals of the MySQL server. For this particular purpose, we will focus on a plug-in that reports all the
SAVEPOINTs available in the current session. This
MYSQL_SAVEPOINTS plug-in may be of some value when debugging scripts and stored routines that rely on complex scenarios using transactions and savepoints.
In a forthcoming article, I will describe a few information schema plug-ins that are arguably more interesting, such as a plug-in to list the currently existing
TEMPORARY tables, user-defined variables, and the contents of the query cache. Although the plug-in described in this article may be of some use, its main purpose is to illustrate the minimal requirements for plug-ins that can access the server’s internals.
MySQL 5.1 offers an extremely useful feature called information_schema plugins. This feature allows dynamic runtime loading of a shared library into the MySQL server to implement a table in the information_schema database. The SQL standard (ISO/IEC 9075-11:2003) allows database implementations to extend the
information_schema. MySQL 5.1 puts this capability directly in the hands of privileged database users, who can extend the
information_schema themselves in any way they see fit.
In this article, we will demonstrate how to create a minimal “Hello, World!” MySQL information schema plugin. In a forthcoming article, we’ll demonstrate how information schema plugins may be used to report some of the server’s internals such as the contents of the query cache, session level objects such as the currently defined
TEMPORARY tables, and user-defined variables.
Earlier today, Sun announced that it will be acquiring MySQL. This is an interesting turn of events in Oracle’s silent battle over MySQL. With Falcon still years away from being production-ready, and Oracle owning the most popular and stable storage engine for MySQL (InnoDB), what are your thoughts on this acquisition and the effects (both positive and negative) it may bring to end-users?
Several of the announcements can be found below:
I believe I found a new method to calculate the median in MySQL. I would not be surprised if this method has been figured out by somebody else already. However, I can’t seem to find any resources on the internet describing this method, so for now I flatter myself by assuming the method is original.
(If I’m wrong about that, please do post a comment on this blog so I have a chance to rectify it.)
The method I’m describing is a one-pass, pure SQL method. It does not require subqueries, cursors, or user variables. However, it does rely on MySQL-specific functions.
I’ll be maintaining a snippet for this method at MySQL Forge.
If you want to know what the median is, and how my snippet works, read on.
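To make the idea concrete before diving into the SQL, here is a hedged Python sketch of the one-pass picture (this is an illustration of the concept, not the actual Forge snippet): concatenate the ordered values into one string, then slice out the middle element(s), which is the kind of work a single aggregating pass can do.

```python
# Sketch of the one-pass median idea (NOT the actual MySQL snippet):
# build an ordered, comma-separated concatenation of the values, then
# pick out the middle item(s) from that string.
def median_one_pass(values):
    # stand-in for an ordered aggregate concatenation of the column
    concatenated = ",".join(str(v) for v in sorted(values))
    items = concatenated.split(",")
    n = len(items)
    lower = float(items[(n + 1) // 2 - 1])  # middle item (odd n), lower middle (even n)
    upper = float(items[(n + 2) // 2 - 1])  # same item (odd n), upper middle (even n)
    return (lower + upper) / 2

print(median_one_pass([1, 3, 2]))     # 2.0
print(median_one_pass([1, 2, 3, 4]))  # 2.5
```

For an odd number of values the two slices land on the same item; for an even number they land on the two middle items, which are then averaged.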
A few times now, I’ve been wanting to write this down. I know: a lot of people will go *shrug*. Others may find me pedantic. Some will say I’m being a smart-ass. Whatever…but I just have to write down a few of these common misconceptions that keep floating around.
None of these misconceptions is really harmful - in most cases, they do not lead to misunderstanding or miscommunication. However, when you are writing about these subjects, you’ll often find that a sloppy definition you used in one place will come back to bite you, making it harder to explain something later on. That is why, from time to time, I get kind of obsessed with finding just the right words.
I’m not pretending I have the right words though. But there are a few informal ways of saying things that at a glance look right but are in fact wrong. Here’s a random list of some of them:
The Date dimension is a well known construct in general data warehousing. In many cases, the data for a date dimension is generated using a database stored procedure or shell-script.
Another approach to obtain the data for a date dimension is to generate it using an ETL tool like Pentaho Data Integration, a.k.a. Kettle. I think this approach makes sense for a number of reasons:
- If you tend to use a particular ETL tool, you will be able to reuse the date dimension generator over and over, and on different database platforms.
- You won’t need special database privileges beyond the ones you already have. Privileges to create tables and perform DML are usually available, whereas you might need to convince a DBA that you require extra privileges to create and execute stored procedures.
In addition to these general considerations, you can pull a neat little trick with Kettle to localize the data and format of the date attributes. I wouldn’t go as far as to say that this feature is Kettle specific: rather, it relies on the localization support built into the java platform and the way you can put that to use in Kettle transformations.
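To illustrate what a date dimension generator produces, independent of any particular tool, here is a hedged sketch (Kettle would do this with row-generation and calculator steps; the column names below are made up for illustration):

```python
from datetime import date, timedelta

def date_dimension(start, days):
    """Yield one row per calendar day -- a sketch of the kind of output
    a date dimension generator transformation produces."""
    for i in range(days):
        d = start + timedelta(days=i)
        yield {
            "date_key": int(d.strftime("%Y%m%d")),  # smart key, e.g. 20080101
            "year": d.year,
            "quarter": (d.month - 1) // 3 + 1,
            "month": d.month,
            "day": d.day,
            "day_of_week": d.isoweekday(),          # 1 = Monday
        }

rows = list(date_dimension(date(2008, 1, 1), 366))  # 2008 is a leap year
print(rows[0]["date_key"])   # 20080101
print(rows[-1]["date_key"])  # 20081231
```

Localized attributes (month names, day names) would be added per target locale, which is exactly where the Java localization support mentioned above comes in.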
Here are the top five items to consider before deciding on a MySQL backup implementation:
* How fast and how easy do you want the MySQL Recovery process to be?
* What will be the impact of MySQL Backup process on your Application?
* What will your backup configuration look like (the what, where, when, and how of MySQL backup)?
* How will you manage your backup process and backed up data?
* What kind of tracking, reporting and compliance requirements does your business have from your MySQL backup implementation?
The white paper provides detailed insights about the above considerations. Your feedback is welcome.
We are working on Zmanda Management Console for our MySQL backup product line: Zmanda Recovery Manager (ZRM) for MySQL. ZRM for MySQL is an enterprise backup and recovery solution for MySQL.
Disclaimer - views expressed in this blog (and this entry) are my own and do not necessarily reflect the views of MySQL AB
Ever since I wrote my blog entry about Google Gears and the query tool for the browser embedded offline Google Gears database service, I have been wondering how MySQL might fit in here.
Google Gears is an open source browser extension created by Google. It provides a framework for creating offline web browser applications. At the moment it provides three services:
- Local Server
  - A data store for static resources. This allows efficient caching of images, scripts, and web pages.
- Database
  - An embedded relational database management system based on SQLite. SQLite should be familiar to most PHP developers, as it has shipped with PHP since version 5.
- Worker Pool
  - A form of threading support inside the browser that allows web applications to initiate long-running processes without hampering the responsiveness of the user interface.
Users only need to install the extension for the browser to be able to access these services when browsing pages.
If you want to get an immediate taste of the database service, be sure to install Google Gears and take a look at my offline, browser-based database client.
There is a popular myth about the SQL
GROUP BY clause. The myth holds that ’standard SQL’ requires columns referenced in the
SELECT list of a query to also appear in the
GROUP BY clause, unless these columns appear exclusively in an aggregated expression. MySQL is often accused of violating this standard.
In this article I will attempt to debunk this myth, and to provide a more balanced view regarding MySQL’s treatment of
GROUP BY at the same time.
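As a taste of the issue, here is a hedged sketch using Python's sqlite3 module in place of MySQL (SQLite is similarly permissive, and the schema is made up): selecting a column that does not appear in the GROUP BY clause is perfectly unambiguous when that column is functionally dependent on the grouping column.

```python
import sqlite3

# c.name is functionally dependent on c.category_id (it's the primary key),
# so selecting it alongside GROUP BY c.category_id yields a deterministic
# result even though it appears in neither the GROUP BY list nor an aggregate.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE category (category_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE film (film_id INTEGER PRIMARY KEY, category_id INTEGER);
    INSERT INTO category VALUES (1, 'Drama'), (2, 'Comedy');
    INSERT INTO film VALUES (1, 1), (2, 1), (3, 2);
""")
rows = conn.execute("""
    SELECT   c.name, COUNT(*)
    FROM     film f JOIN category c ON f.category_id = c.category_id
    GROUP BY c.category_id
""").fetchall()
print(sorted(rows))  # [('Comedy', 1), ('Drama', 2)]
```

The contentious cases are the ones where no such functional dependency holds, and that distinction is exactly what the myth glosses over.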
Shortly before the MySQL Users Conference, I announced that I would be covering new ground in log table management.
I am keeping that promise, and in addition I am also showing some related hacks.
The facts announced in last year's usability report were that you can't change log tables at will, as you can with log files, and that you can't change a log table's engine to FEDERATED. Both claims, as it turned out, were incorrect. You can do both, albeit not in a straightforward manner. As a bonus side effect, you can also:
- add triggers to log tables;
- filter log tables depending on user defined criteria, such as query type, user database, or time;
- centralize logs from several servers.
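The trigger-based filtering idea, for instance, can be pictured like this (a hedged sketch using Python's sqlite3 as a stand-in for MySQL; the real hack operates on mysql.slow_log after switching its engine to one that supports triggers, and the side table and trigger names here are made up):

```python
import sqlite3

# Sketch: a trigger copies only the log rows matching a criterion
# (here, SELECT statements) into a side table for later analysis.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE slow_log (user_host TEXT, sql_text TEXT);
    CREATE TABLE slow_log_selects (user_host TEXT, sql_text TEXT);
    CREATE TRIGGER filter_selects AFTER INSERT ON slow_log
    WHEN NEW.sql_text LIKE 'SELECT%'
    BEGIN
        INSERT INTO slow_log_selects VALUES (NEW.user_host, NEW.sql_text);
    END;
""")
conn.execute("INSERT INTO slow_log VALUES ('app@host', 'SELECT * FROM t')")
conn.execute("INSERT INTO slow_log VALUES ('app@host', 'UPDATE t SET x = 1')")
print(conn.execute("SELECT COUNT(*) FROM slow_log_selects").fetchone()[0])  # 1
```

The same pattern extends to filtering by user, database, or time, and a FEDERATED-style target table is what makes centralizing logs from several servers possible.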
Thought I’d pass along this note I received, that may be of interest to the MySQL addicts out there:
>> Sasha Pachev, whose book Understanding MySQL Internals was released
>> last month by O’Reilly, is leading an online seminar at MySQL AB on
>> “Improving query performance through a better understanding of the optimizer.”
>> You can present Sasha with your own SQL queries during this webinar
>> and learn how to interpret output of the EXPLAIN command to improve
>> your performance. This webinar is also a useful accompaniment to
>> Understanding MySQL Internals, which contains extensive information
>> on EXPLAIN and the behavior of the optimizer exposed by it.