Portfolio
News feeds
Planet MySQL
Planet MySQL - https://planet.mysql.com

  • New Continuent Tungsten Replicator (AMI): The Advanced Replication Engine For MySQL, MariaDB, Percona Server & AWS Aurora
    Discover the new Continuent Tungsten Replicator (AMI) – the most advanced and flexible replication engine for MySQL, MariaDB and Percona Server, including Amazon RDS MySQL and Amazon Aurora.

    We’re excited to announce the availability on the Amazon Marketplace of a new version of the Tungsten Replicator (AMI). Tungsten Replicator (AMI) is a replication engine that provides high-performance, improved replication functionality over the native MySQL replication solution, plus the ability to apply real-time MySQL data feeds into a range of analytics and big data databases. Tungsten Replicator (AMI) builds upon the well-established, commercial stand-alone product Tungsten Replicator and offers the exact same functionality, but with the convenience that comes with an on-demand online service: cost-effective, rapid and automated deployment.

    How to get started – 14-Day Free Trial
    Users can start with the new Tungsten Replicator by trying one instance of the AMI for free for 14 days (AWS infrastructure charges still apply). Free trials automatically convert to a paid hourly subscription upon expiration.

    What’s new in this release?
    • New Targets: Tungsten Replicator (AMI) now supports the full range of targets, in line with the stand-alone Tungsten Replicator product.
    • Latest Tungsten Replicator: the new AMI launches the latest 6.1.1 release of Tungsten Replicator.
    • Improved Installation Wizard: the installation wizard now provides an easier way to configure additional advanced options such as SSL and filtering.
    • Cluster-Slave Support: if you are an existing Tungsten Clustering user, the AMI now enables you to configure a Cluster-Slave to replicate, in real time, from your existing Tungsten cluster to any of the available targets.
    • Simplified Pricing: with Tungsten Replicator (AMI) you can now easily mix and match source/target combinations. We have split the Extractor out into its own dedicated AMI; you then pick and choose the target AMI suitable for your environment. This enables much easier configuration, simplified management, and easier configuration of fan-out topologies.

    Replication extraction from operational databases:
    • MySQL (all versions, on-premises and in the cloud)
    • MariaDB (all versions, on-premises and in the cloud)
    • Percona Server (all versions, on-premises and in the cloud)
    • AWS Aurora
    • AWS RDS MySQL
    • Azure MySQL
    • Google Cloud SQL

    Replication target databases available as of today:
    • OLTP: AWS Aurora, MySQL, PostgreSQL, Oracle
    • Analytics: AWS Redshift, Vertica
    • Also: Cassandra, Clickhouse, Elasticsearch, Hadoop, Kafka, MongoDB

    Top product benefits:
    • Multiple Targets: replicate directly into popular analytic repositories such as MySQL (all variations), PostgreSQL, AWS Redshift, Kafka, Vertica, Hadoop, Oracle, Cassandra, Elasticsearch and Clickhouse.
    • Advanced Filtering: filter your replication in flight at schema, object or even row/column level. Filter at extraction, or during apply.
    • Cost not tied to transaction volume: unlimited real-time transactional data transfer eliminates the escalating replication costs of ETL-based alternatives.
    • Multi-master: global, real-time transaction processing with flexible multi-master replication configurations between clusters (MySQL as source and target only).

    Top product highlights: Tungsten Replicator (AMI) is platform agnostic and features real-time replication between database instances, filtering of data down to row level, parallel replication, and SSL for added security.
    Tungsten Replicator (AMI) includes the ability to apply data into many replication targets in addition to MySQL (all flavors and versions) – such as PostgreSQL, AWS Redshift, Kafka and Vertica – by enabling the replicated information to be transformed after it has been read from the data server to match the functionality or structure of the target server. For MySQL users, the enhanced functionality and information provided by Tungsten Replicator (AMI) allows for global transaction IDs; advanced topology support such as multi-master, star, and fan-in; enhanced latency identification; and filtering and transforming data in flight.

    Replicate from AWS Aurora, AWS RDS MySQL, MySQL, MariaDB and Percona Server from as little as $0.50/hour. With Tungsten Replicator (AMI) on AWS, users can replicate gigabytes of data for as little as 50c/hour:
    • Go to the AWS Marketplace and search for Tungsten, or click here.
    • Choose and subscribe to the Tungsten Replicator for MySQL Source Extraction.
    • Choose and subscribe to the target Tungsten Replicator AMI of your choice.
    • Pay by the hour.
    When launched, the host has all the prerequisites in place; a simple “wizard” runs on first launch, asks the user questions about the source and/or target, and then configures it all for them. Users can also start by trying one instance of the AMI for 14 days. There are no hourly software charges for that instance, but AWS infrastructure charges still apply. Free trials automatically convert to a paid hourly subscription upon expiration.

    Tungsten Replicator OSS
    For those of you who might wonder: Tungsten Replicator OSS is, for all practical purposes, obsolete. While Tungsten Replicator OSS may still be available in various repositories, all OSS versions are outdated. We recommend you try out the new Tungsten Replicator (AMI), or contact us to find out more about the stand-alone, commercial product.

    We look forward to your feedback on the new Tungsten Replicator (AMI) – please comment below!

  • Database Tab Sweep
    I miss a proper database-related newsletter for busy people. There’s so much happening in the space, from tech, to licensing, and even usage. Anyway, a quick tab sweep.

    • Paul Vallée (of Pythian fame) has been working on Tehama for some time, and now he gets to do it full time, as a PE firm bought control of Pythian’s services business. Pythian has more than 350 employees and 250 customers, and has raised capital before. More at Ottawa’s Pythian spins out software platform Tehama.
    • Database leaks data on most of Ecuador’s citizens, including 6.7 million children – ElasticSearch.
    • Percona has launched Percona Distribution for PostgreSQL 11. This means they have servers for MySQL, MongoDB, and now PostgreSQL. Looks very much like a packaged server with tools from 3rd parties (source).
    • Severalnines has launched Backup Ninja, an agent-based SaaS service to back up popular databases in the cloud. Backup.Ninja (cool URL) supports MySQL (and variants), MongoDB, PostgreSQL and TimeScale. No pricing available, but it is free for 30 days.
    • Comparing Database Types: How Database Types Evolved to Meet Different Needs.
    • New In PostgreSQL 12: Generated Columns – anyone doing a comparison with MariaDB Server or MySQL? (A quick syntax sketch follows this list.)
    • Migration Complete – Amazon’s Consumer Business Just Turned off its Final Oracle Database – a huge deal, as they migrated 75 petabytes of internal data to DynamoDB, Aurora, RDS and Redshift. Amazon, powered by AWS, and a big win for open source (a lot of these services are built on open source).
    • MongoDB and Alibaba Cloud Launch New Partnership – I see this as a win for the SSPL relicense. It is far too costly to maintain a drop-in compatible fork in a single company (hi, Amazon DocumentDB!). Maybe if the PostgreSQL layer gets open sourced there is a chance, but otherwise, all good news for Alibaba and MongoDB.
    • MySQL 8.0.18 brings hash join, EXPLAIN ANALYZE, and more interestingly, HashiCorp Vault support for MySQL Keyring (Percona has an open source variant).
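    As context for that generated-columns comparison, here is a minimal syntax sketch of both sides (table and column names are made up for illustration). PostgreSQL 12 supports only STORED generated columns, while MySQL 5.7+ and MariaDB also offer VIRTUAL ones:

    -- PostgreSQL 12
    CREATE TABLE prices (
        net   numeric NOT NULL,
        vat   numeric NOT NULL,
        gross numeric GENERATED ALWAYS AS (net + vat) STORED  -- STORED is the only option in PG 12
    );

    -- MySQL 5.7+ / MariaDB (MariaDB also accepts PERSISTENT as a synonym for STORED)
    CREATE TABLE prices (
        net   DECIMAL(10,2) NOT NULL,
        vat   DECIMAL(10,2) NOT NULL,
        gross DECIMAL(10,2) GENERATED ALWAYS AS (net + vat) VIRTUAL  -- or STORED
    );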

  • Time in Performance Schema
    I've seen questions like this: "Is there a way to know when (date and time) the last statement captured in ... was actually run?" more than once in discussions (and customer issues) related to Performance Schema. I've seen answers provided by my colleagues and me after some limited testing. I've also noticed statements that it may not be possible. Indeed, examples of wall clock date and time in the output of queries from the performance_schema are rare (and usually come from tables in the information_schema). The sys.format_time() function converts time to a nice, human-readable format, but it still remains relative – it is not the date and time when something recorded in performance_schema happened.

    In this post I'd like to document the answer I've seen and have in mind (and the steps to get it), to save time for readers and myself when faced with similar questions in the future. I'll also show the problem with this answer that I noticed after testing for more than a few minutes.

    Let's start with a simple setup of the testing environment. In my case it is good old MariaDB 10.3.7 running on this netbook under Windows. First, let's check if Performance Schema is enabled:

    MariaDB [test]> select version(), @@performance_schema;
    +--------------------+----------------------+
    | version()          | @@performance_schema |
    +--------------------+----------------------+
    | 10.3.7-MariaDB-log |                    1 |
    +--------------------+----------------------+
    1 row in set (0.233 sec)

    Then let's enable recording of time for everything and enable all consumers:

    MariaDB [test]> update performance_schema.setup_instruments set enabled='yes', timed='yes';
    Query OK, 459 rows affected (0.440 sec)
    Rows matched: 707  Changed: 459  Warnings: 0

    MariaDB [test]> update performance_schema.setup_consumers set enabled='yes';
    Query OK, 8 rows affected (0.027 sec)
    Rows matched: 12  Changed: 8  Warnings: 0

    Now we can expect recently executed statements to be recorded, like this:

    MariaDB [test]> select now(), event_id, timer_start, timer_end, sql_text from performance_schema.events_statements_current\G
    *************************** 1. row ***************************
          now(): 2019-11-03 17:42:51
       event_id: 46
    timer_start: 22468653162059216
      timer_end: 22468697203533224
       sql_text: select now(), event_id, timer_start, timer_end, sql_text from performance_schema.events_statements_current
    1 row in set (0.045 sec)

    Good, but how can we get the real time when the statement was executed (like now() reports)? We all know from the fine manual that timer_start and timer_end values are in "picoseconds". So we can easily convert them into seconds (or whatever units we prefer):

    MariaDB [test]> select now(), event_id, timer_start/1000000000000, sql_text from performance_schema.events_statements_current\G
    *************************** 1. row ***************************
                        now(): 2019-11-03 17:54:02
                     event_id: 69
    timer_start/1000000000000: 23138.8159
                     sql_text: select now(), event_id, timer_start/1000000000000, sql_text from performance_schema.events_statements_current
    1 row in set (0.049 sec)

    One might assume this value is relative to startup time, and we can indeed expect the timer in Performance Schema to be initialized at some very early stage of startup. But how do we get the date and time of server startup with an SQL statement? This also seems easy, as we have a global status variable called Uptime, measured in seconds.
    Depending on the fork and version used, we can get the value of Uptime either from the Performance Schema (in MySQL 5.7+) or from the Information Schema (in MariaDB and older MySQL versions). For example:

    MariaDB [test]> select variable_value from information_schema.global_status where variable_name = 'Uptime';
    +----------------+
    | variable_value |
    +----------------+
    | 23801          |
    +----------------+
    1 row in set (0.006 sec)

    So, server startup time is easy to get with the date_sub() function:

    MariaDB [test]> select @start := date_sub(now(), interval (select variable_value from information_schema.global_status where variable_name = 'Uptime') second) as start;
    +----------------------------+
    | start                      |
    +----------------------------+
    | 2019-11-03 11:28:18.000000 |
    +----------------------------+
    1 row in set (0.007 sec)

    In the error log of the MariaDB server I see:

    2019-11-03 11:28:18 0 [Note] mysqld (mysqld 10.3.7-MariaDB-log) starting as process 5636 ...

    So I am sure the result is correct. Now, if we use date_add() to add the timer value converted to seconds to the server startup time, we get the desired answer – the date and time when the statement recorded in performance_schema was really executed:

    MariaDB [test]> select event_id, @ts := date_add(@start, interval timer_start/1000000000000 second) as ts, sql_text, now(), timediff(now(), @ts) from performance_schema.events_statements_current\G
    *************************** 1. row ***************************
                event_id: 657
                      ts: 2019-11-03 18:24:00.501654
                sql_text: select event_id, @ts := date_add(@start, interval timer_start/1000000000000 second) as ts, sql_text, now(), timediff(now(), @ts) from performance_schema.events_statements_current
                   now(): 2019-11-03 18:24:05
    timediff(now(), @ts): 00:00:04.498346
    1 row in set (0.002 sec)

    I was almost ready to publish this blog post a week ago, before paying more attention to the result (which used to be perfectly correct in earlier simple tests) and executing a variation of the statement presented above. The problem I noticed is that when the Uptime of the server is not just a few minutes (as often happens in quick test environments) but hours or days, the timestamp that we get for a recent event from the performance_schema using the suggested approach may differ notably from the current timestamp (we see a 4.5+ second difference above). Moreover, this difference seems to fluctuate:

    MariaDB [test]> select event_id, @ts := date_add(@start, interval timer_start/1000000000000 second) as ts, sql_text, now(), timediff(now(), @ts) from performance_schema.events_statements_current\G
    *************************** 1. row ***************************
                event_id: 682
                      ts: 2019-11-03 18:24:01.877763
                sql_text: select event_id, @ts := date_add(@start, interval timer_start/1000000000000 second) as ts, sql_text, now(), timediff(now(), @ts) from performance_schema.events_statements_current
                   now(): 2019-11-03 18:24:07
    timediff(now(), @ts): 00:00:05.122237
    1 row in set (0.002 sec)

    and tends to grow with Uptime. This makes the entire idea of converting the timer_start and timer_end Performance Schema "counters" in "picoseconds" questionable and unreliable for precise real-timestamp matching and for comparing with other timestamp information sources in production. Same as with this photo of the sunset at Brighton, taken with my Nokia dumb phone back in June, I do not see a clear picture of time measurement in Performance Schema...
    After spending some more time thinking about this, I decided to involve the MySQL team somehow and created the feature request, Bug #97558 - "Add function (like sys.format_time) to convert TIMER_START to timestamp", which quickly ended up "Verified" (so I have a small hope that I had not missed anything really obvious – correct me if I am wrong). I'd be happy to see further comments there and, surely, the function I asked about implemented. But I feel there is some internal problem with this, and a new server-side feature may be needed to take the "drift" of time in Performance Schema into account.

    There is also a list of currently open questions that I may try to answer in follow-up posts:
    • Is the time drift I noticed specific to MariaDB 10.3.7, or are recent MySQL 5.x and 8.0.x also affected?
    • Does this difference grow monotonically with time, or does it really fluctuate?
    • When exactly does the Performance Schema time "counter" start, and where is it in the code?
    • Are there any other, better or at least more precise and reliable ways to get timestamps of specific events that happen while the MySQL server is working? I truly suspect that gdb, and especially dynamic tracing on Linux with tools like bpftrace, may give us more reliable results...
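    For readers who want to try the approach end to end, the whole derivation above can be collapsed into a single statement. This is a minimal sketch under the same assumptions as the post (Uptime read from information_schema.global_status as on MariaDB and older MySQL, all consumers enabled), and it carries the same drift caveat:

    -- Approximate wall-clock start time of recent statements: a sketch, not a precise tool.
    -- On MySQL 5.7+ read Uptime from performance_schema.global_status instead.
    SELECT date_add(
             (SELECT date_sub(now(), interval variable_value second)
                FROM information_schema.global_status
               WHERE variable_name = 'Uptime'),        -- approximate server startup time
             interval timer_start/1000000000000 second -- timer_start is in "picoseconds"
           ) AS approx_started_at,
           sql_text
      FROM performance_schema.events_statements_history_long
     ORDER BY timer_start DESC
     LIMIT 10;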

  • New dbForge Fusion tools with Visual Studio 2019 Support
    dbForge Fusion is a line of Visual Studio plugins designed to simplify database development and enhance data management capabilities. This line comprises three tools: dbForge Fusion for SQL Server, dbForge Fusion for MySQL, and dbForge Fusion for Oracle. We are happy to announce new updates for each of these tools which come with many improvements […]

  • InnoDB : Tablespace Space Management
    In InnoDB, a user-defined table and its corresponding index data are stored in files that have the extension .ibd. There are two types of tablespaces: general (or shared) tablespaces and file-per-table. For shared tablespaces, data from many different tables and their corresponding indexes may reside in a single .ibd file.…
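    To make the two flavors concrete, here is a minimal sketch in MySQL 5.7+/8.0 syntax (table, tablespace and file names are made up for illustration):

    -- File-per-table (the default since MySQL 5.6): t1 gets its own t1.ibd file.
    CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=InnoDB;

    -- General (shared) tablespace: several tables can share one .ibd file.
    CREATE TABLESPACE ts1 ADD DATAFILE 'ts1.ibd' ENGINE=InnoDB;
    CREATE TABLE t2 (id INT PRIMARY KEY) ENGINE=InnoDB TABLESPACE ts1;
    CREATE TABLE t3 (id INT PRIMARY KEY) ENGINE=InnoDB TABLESPACE ts1;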