
Hi phoque, nice update!

I would suggest the insertion of date in the filename, ex:

(data|authors)-yyyymmdd-{hashcode}.sql

or

yyyymmdd-(data|authors)-{hashcode}.sql

I would suggest the insertion of date in the filename

You can do that, simply put

'format' => '%1$s-%3$s-%2$s.sql',

in the appropriate section in your config.php. See the README for additional information.
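In manifest/config.php that ends up looking roughly like the sketch below. The group name 'dump_db' is an assumption based on the extension's directory name, and the README documents which value each placeholder expands to:

    // manifest/config.php (excerpt) -- a sketch only, group name assumed
    $settings = array(
        // ... existing groups ('symphony', 'database', ...) ...
        'dump_db' => array(
            'format' => '%1$s-%3$s-%2$s.sql',
        ),
    );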

I’ve made it optional because overwriting the same file over and over again fits nicely into a version-controlled workflow such as Git.

Great. Thanks! =]

This is really useful! I love being able to click Save Data, commit the changes to Git, then mess around with other admin changes. If I don’t like what I’ve done, I can click “Restore Data” and be right back where I was before.

Syncing data between a development and production site is so easy. I’m assuming that your warning refers to live sites with a lot of user activity:

Just note that this feature should never be used in a production environment.

Is this to avoid the possibility of overwriting user modifications to the production site? Or did you have other concerns with using this extension?

Is this to avoid the possibility of overwriting user modifications to the production site? Or did you have other concerns with using this extension?

Yes, and the fact that you cannot check if the import will be successful or fail terribly. Also, the way the script is splitting up several queries from a file into single ones (PHP accepts only one at a time) isn’t guaranteed to run 100% successfully.
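To illustrate the splitting risk with a rough sketch (this is not the extension's actual code), a semicolon inside a value is already enough to confuse a naive split:

    // Rough sketch of the failure mode, not the extension's actual code.
    $dump = "INSERT INTO tbl (body) VALUES ('one; two');\nUPDATE tbl SET x = 1;";

    // Splitting on every semicolon cuts the first statement in half:
    $broken = explode(';', $dump);
    // $broken[0] === "INSERT INTO tbl (body) VALUES ('one"  -- no longer valid SQL

    // Splitting only on "semicolon + newline" is safer, but still not bulletproof:
    $better = preg_split('/;\s*[\r\n]+/', $dump, -1, PREG_SPLIT_NO_EMPTY);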

Okay. Thanks for advising on the possible points of failure. I’m going to be trying out a few different database syncing options to see what works best. So far, I like the simplicity of this option the best. :-)

I’ve introduced another new feature: you may now set a special dump method, either download or text. In both cases the extension will not touch the local files but will either force your browser to download the dump or simply display it.

This is handy for push-only environments such as servers managed by Capistrano or DeployHQ. You may then pull the database, import it into your local copy, work and test on it and finally deploy it the usual way.
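For instance, importing a pulled dump into your local database is then a one-liner (the user and database names below are just placeholders):

mysql -u youruser -p yourdatabase < data.sql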

As always, the new code can be found in the integration branch. An official release will come soon.

Is this to avoid the possibility of overwriting user modifications to the production site?

To ensure integrity across all installations during the process, I recently also wrote a readonly_mode extension that prevents authors and events from changing data.

I’m just letting people know that I have changed the repository URL to match the directory name in extensions/: https://github.com/nils-werner/dump_db.

If you’re using this extension as a submodule, you need to change the URL in .gitmodules and .git/config (basically replace the hyphen with an underscore).
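The relevant entry in .gitmodules should then look roughly like this (the submodule name may differ in your repository):

    [submodule "extensions/dump_db"]
        path = extensions/dump_db
        url = git://github.com/nils-werner/dump_db.git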

You must then change the remote in the directory extensions/dump_db itself by executing

git remote rm origin && git remote add origin git://github.com/nils-werner/dump_db.git

and then, after committing the change to .gitmodules, run

git submodule update --init

in your super-repository again.

Sorry for any inconvenience.

Maybe this is a stupid question, maybe this is a bug — I don’t know yet:

I’m using a shared repository where all the authors can export their current database using this extension. I expected the restore feature to take the latest available dump in the file system but for some reason it doesn’t import anything.

I’m using a timestamp in my configuration and it seems like the extension is looking for a file with the current date and time and not for the newest file in my workspace.

Is it me who is doing something wrong or is it the extension?

I expected the restore feature to take the latest available dump in the file system but for some reasons it doesn’t import anything.

Ha, that’s what you get for not testing all cases. Luckily, it’s marked as unstable so I can say: Told you so! :-D

No, seriously. It’s a bug and you’re right in your “current date” assumption. I am not sure about the timestamp solution as a whole (that weird placeholder instead of simply letting people use date() themselves), so maybe I will limit restoring to “static filenames”.

Any other ideas?

What about a configurable naming convention like phpMyAdmin’s, which interprets the naming string with the strftime() function and offers some other transformations?
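Roughly like this (a hypothetical sketch, not the extension's current code): expand the configured name with strftime() when dumping, and on restore simply pick the newest matching file instead of recomputing today's name:

    // Hypothetical sketch, not the extension's current code.
    // Dumping: let strftime() expand date placeholders in the configured name.
    $filename = strftime('data-%Y%m%d.sql');           // e.g. "data-20110404.sql"

    // Restoring: take the most recently modified dump instead of today's name.
    $candidates = glob('/path/to/workspace/data-*.sql');
    usort($candidates, function ($a, $b) {
        return filemtime($b) - filemtime($a);          // newest first
    });
    $latest = empty($candidates) ? null : $candidates[0];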

Thank you for this one. Together with readonly_mode, it’s a great combination.

Dump DB updated to version 1.08 on 4th of March 2011

Compatibility with Symphony 2.2

Dump DB updated to version 1.08 on 4th of March 2011

I am new to Symphony. Can someone tell me how to use the Dump DB extension? Thanks.

It took me a while to figure out why, when I updated the Dump DB extension from version 1.06 to 1.08, I would encounter a fatal error when navigating to the Preferences page:

sprintf(): Too few arguments
    /Users/stephen/Sites/domain7/team-members/extensions/dump_db/extension.driver.php line 285

    280         }
    281         
    282     }
    283     
    284     private function generateFilename($mode) {
    285         return sprintf($this->format, $mode);
    286     }
    287 }

It had to do with the change to the format preference stored in the configuration file. I was using the following format, recommended in the documentation for version 1.06:

'format' => '%1$s-%2$s.sql'
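A minimal reproduction, assuming (as the generateFilename() method above suggests) that version 1.08 passes only the dump mode to sprintf():

    $format = '%1$s-%2$s.sql';      // the old two-placeholder format
    echo sprintf($format, 'data');  // "sprintf(): Too few arguments" -- one argument for two placeholders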

For anyone wondering why updating results in the error above, update the manifest/config.php file with one of the format settings recommended in the updated documentation.

'format' => '%1$s.sql'

or

'format' => '%1$s-'.date('Ymd').'.sql'

If you use the latter, just make sure the date.timezone setting is set in your php.ini file to avoid PHP warnings.
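For example, in php.ini (substitute your own timezone):

date.timezone = "Europe/Berlin"

If you cannot edit php.ini, calling date_default_timezone_set('Europe/Berlin') before date() is used works as well.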

Oh, right. Sorry about that. I figured that inserting date(whateveryoulike) made much more sense than using a sprintf placeholder.

@phoque

I'm getting a strange error when trying to run some queries from data.sql. I changed its contents to some queries from Nick's db_sync.sql :)

The queries work fine if I execute them from phpMyAdmin but they crash when executed from data.sql via dump_db.

Do you have any idea what's going on?

These are the queries:

-- 2011-04-04 09:43:17, Vlad Ghita, http://localhost/arc22/symphony/blueprints/sections/edit/17/
UPDATE arc_sections SET  `name` = 'Stiri',  `navigation_group` = 'Stiri si Evenimente',  `static` = 'no',  `handle` = 'stiri',  `hidden` = 'no' WHERE  `id` = 17;
UPDATE arc_fields SET  `element_name` = 'informatii',  `label` = 'Informatii',  `parent_section` = '17',  `location` = 'main',  `required` = 'no',  `type` = 'publish_tabs',  `show_column` = 'no',  `sortorder` = '0' WHERE  `id` = '59';
DELETE FROM `arc_fields_publish_tabs` WHERE `field_id` = '59' LIMIT 1;

;

Attachments:
mysql_dump_db.png

PHP's mysql extension is set to disallow running multiple queries at once, so before executing them I must split them apart.

For that, the queries must be delimited by ;\r\n (a semicolon immediately followed by a carriage return and newline). In the case of Nick Dunn's extension, queries are concatenated using ;\n.

phoque, why not use preg_split() instead of explode()? I guess explode() can be a tiny bit faster, but it is called only once anyway, so the speed difference should not matter that much. I can see that you have already tried that, but with a much more advanced pattern than the one used in the call to explode(). Have you tried something simpler, like:

$queries = preg_split('/;[\r\n]+/', $data, -1, PREG_SPLIT_NO_EMPTY);
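For what it's worth, a quick sketch shows that such a pattern copes with both the ;\r\n delimiter this extension writes and the ;\n that db_sync uses:

    // Quick sketch: one pattern handles both ";\r\n" and ";\n" delimited dumps.
    $dump = "UPDATE a SET x = 1;\r\nUPDATE b SET y = 2;\nDELETE FROM c WHERE id = 3;";
    $queries = preg_split('/;[\r\n]+/', $dump, -1, PREG_SPLIT_NO_EMPTY);
    // Result: "UPDATE a SET x = 1", "UPDATE b SET y = 2", "DELETE FROM c WHERE id = 3;"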

