MDEV-39261 MariaDB crash on startup in presence of indexed virtual columns #4914
Thirunarayanan wants to merge 1 commit into 10.6
Conversation
dr-m left a comment:
Thank you, great work in reproducing the error. I found some ways to simplify this logic further. I think that we should invoke open_purge_table() for each dict_table_t::id only once per purge batch, in trx_purge_table_acquire(), provided that indexed virtual columns exist. In that way, when we are purging individual undo log records, we are guaranteed to have a TABLE* handle available.
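The "once per dict_table_t::id per purge batch" idea above can be sketched as follows. This is a hypothetical, simplified model: `batch_table_cache`, `open_purge_table_stub()` and `acquire()` are stand-in names, not the real server functions; only the open-once-then-reuse bookkeeping is illustrated.

```cpp
#include <cassert>
#include <unordered_map>

using table_id_t = unsigned long long;
struct TABLE { table_id_t id; };

// Per-batch cache: open_purge_table() (modelled by the stub below) is
// invoked at most once per table id within a purge batch, so workers
// purging individual undo records always find a TABLE* handle ready.
struct batch_table_cache
{
  std::unordered_map<table_id_t, TABLE*> opened; // tables opened this batch
  int open_calls = 0;                            // how often we really opened

  // Stand-in for the expensive open_purge_table() call.
  TABLE *open_purge_table_stub(table_id_t id)
  { ++open_calls; return new TABLE{id}; }

  // Acquire a handle: reuse within the batch, open only on first touch.
  TABLE *acquire(table_id_t id)
  {
    auto it = opened.find(id);
    if (it != opened.end())
      return it->second;
    return opened.emplace(id, open_purge_table_stub(id)).first->second;
  }

  ~batch_table_cache() { for (auto &p : opened) delete p.second; }
};
```

Repeated `acquire()` calls for the same id return the same pointer without a second open, which is the property the review comment is after.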
Force-pushed from 60d1668 to 227427a
storage/innobase/include/trx0purge.h (outdated)

    trx_purge() after all workers complete. */
    THD* coordinator_thd= nullptr;
This is mixing TAB and space indentation.
I wonder if we truly need this field. Can we get the necessary information via MDL_ticket::get_ctx()?
When innodb_purge_threads=1, the coordinator acts as both coordinator and
worker, calling srv_purge_worker_task_low() directly. In this case, it
processes purge_node_t entries and would normally call
innobase_reset_background_thd() in purge_node_t::end(), which would
prematurely close all open TABLE* objects via close_thread_tables().
This would break the batch processing since tables need to remain open
until all purge_node_t entries are processed.
Worker threads (when innodb_purge_threads > 1) have their own THD
lifecycle managed by acquire_thd()/release_thd() and must call
innobase_reset_background_thd() in purge_node_t::end() to clean up
their thread-local resources.
By comparing the current THD against coordinator_thd, purge_node_t::end()
can skip the premature cleanup for the coordinator while still allowing
workers to properly clean up their resources. The coordinator's cleanup
happens centrally in trx_purge() after all nodes complete.
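The skip logic explained above can be modelled in a few lines. All types here are stubs standing in for the real server types (`THD`, `purge_sys`, `innobase_reset_background_thd()`); only the coordinator-vs-worker comparison is shown.

```cpp
#include <cassert>

struct THD { bool tables_closed = false; };

// Stand-in for purge_sys_t::coordinator_thd.
struct { THD *coordinator_thd = nullptr; } purge_sys;

// Stand-in for innobase_reset_background_thd(): would invoke
// close_thread_tables() on the given THD.
static void innobase_reset_background_thd(THD *thd)
{ thd->tables_closed = true; }

struct purge_node_t
{
  THD *thd;
  // end(): a worker cleans up its own THD, while the coordinator (which
  // also acts as a worker when innodb_purge_threads=1) defers its cleanup
  // to trx_purge(), after all purge_node_t entries are processed.
  void end()
  {
    if (thd != purge_sys.coordinator_thd)
      innobase_reset_background_thd(thd);
  }
};
```

With this comparison in place, the coordinator's open TABLE* objects survive until the whole batch is done, while worker THDs are still reset as before.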
    if (thd != purge_sys.coordinator_thd)
      innobase_reset_background_thd(thd);
Do we really need this call here, or could we avoid adding purge_sys.coordinator_thd and invoke this only in trx_purge_close_tables()? Note: The comment of that function incorrectly hints that it could be called by purge workers. That needs to be corrected; it is only called by the purge coordinator task.
Yes, we need this here; the reason is mentioned in the comment above.
    if (!table)
      return nullptr;
    /* At this point, the freshly loaded table may already have been evicted.
    We must look it up again while holding a shared dict_sys.latch. We keep
    trying this until the table is found in the cache or it cannot be found
    in the dictionary (because the table has been dropped or rebuilt). */
It is a little hard to review this change, because some code was shuffled around. Can you make sure that a minimal diff will be displayed for the function trx_purge_table_open()? I can see that we removed the comment and added a dereferencing of table after it. I will post a separate comment on the moved snippet where this comment had been omitted.
I realized that the comment is at the end of a for (;;) loop body. So, the next iteration of the loop should look up the freshly loaded table.
    dict_sys.lock(SRW_LOCK_CALL);
    table= dict_load_table_on_id(table_id, DICT_ERR_IGNORE_FK_NOKEY);
    dict_sys.unlock();
    if (!table)
      return nullptr;
    }

    if (!table->is_readable() || table->corrupted)
In this function trx_purge_table_open(), which the patch is moving elsewhere, so that all of the code will be indicated as new here, there used to be a comment after the !table check, saying that the freshly loaded table could already be evicted at that point, and therefore it is not safe to access the table without looking it up first. Now the comment is gone, and we are dereferencing table in an unsafe way.
Can dict_load_table_on_id() ever return an unreadable or corrupted table, or would it return nullptr in that case? I wonder if the condition could be relocated.
However, for the rest of the function, the same problem exists. Apparently, the subsequent code is assuming that we are holding dict_sys.freeze(). That is not the case if we had invoked dict_load_table_on_id().
I realized that the table would be latched in the next iteration of this for (;;) loop. So, there is no correctness issue. Only the comment was being removed, making it harder to understand the logic.
Force-pushed from 227427a to c69ba83
Force-pushed from c69ba83 to 11a7390
…lumns

Problem:
========
A single InnoDB purge worker thread can process undo log records from
different tables within the same batch. But get_purge_table() and
open_purge_table() incorrectly assume a 1:1 relationship between a purge
worker thread and a table within a single batch. Based on this wrong
assumption, InnoDB attempts to reuse TABLE objects cached in
thd->open_tables for virtual column computation.

1) A purge worker opens Table A and caches the TABLE pointer in
   thd->open_tables.
2) The same purge worker moves to Table B in the same batch;
   get_purge_table() retrieves the cached pointer for Table A instead of
   opening Table B.
3) Because innobase::open() is skipped for Table B, the virtual column
   template is never initialized.
4) Virtual column computation for Table B aborts the server.

Solution:
========
The purge coordinator thread now opens both the InnoDB dict_table_t* and
the MariaDB TABLE* handle during batch preparation in
trx_purge_attach_undo_recs(). purge_node_t::tables now stores only
std::pair<dict_table_t*, TABLE*>. When a worker thread needs a TABLE* for
virtual column computation, it fetches it from purge_node_t::tables.

trx_purge_table_open(): Modified to open the TABLE* using the
coordinator's MDL ticket.
open_purge_table(): Accepts and uses the MDL ticket from the coordinator
thread.
purge_sys_t::coordinator_thd: Tracks the coordinator thread in the purge
subsystem.
purge_node_t::end(): Prevents premature table closure when the coordinator
acts as a worker, by skipping innobase_reset_background_thd().
purge_sys_t::close_and_reopen(): Properly handles the retry logic by
clearing all table state and reopening the tables. Uses mdl_map to
maintain the MDL tickets for each table id.
trx_purge_close_tables(): Now accepts an mdl_map parameter to release the
MDL tickets from the coordinator's map after closing the tables.
trx_purge(): MDL tickets are now stored in a local mdl_map instead of
purge_node_t; closes the coordinator's TABLE* objects after all workers
have completed.

Declared open_purge_table() and close_thread_tables() in trx0purge.cc.
Force-pushed from 11a7390 to 3ef5939
dr-m left a comment:
Please include the following changes in your next iteration:
diff --git a/storage/innobase/handler/ha_innodb.cc b/storage/innobase/handler/ha_innodb.cc
index f01ff8f3dcb..7f95cbbdaea 100644
--- a/storage/innobase/handler/ha_innodb.cc
+++ b/storage/innobase/handler/ha_innodb.cc
@@ -5855,6 +5855,9 @@ static void initialize_auto_increment(dict_table_t *table, const Field& field,
@retval 0 on success */
int
ha_innobase::open(const char* name, int, uint)
+{ return open(name, false); }
+
+int ha_innobase::open(const char *name, bool for_vc_purge)
{
char norm_name[FN_REFLEN];
@@ -6075,8 +6078,6 @@ ha_innobase::open(const char* name, int, uint)
/* Index block size in InnoDB: used by MySQL in query optimization */
stats.block_size = static_cast<uint>(srv_page_size);
- const my_bool for_vc_purge = THDVAR(thd, background_thread);
-
if (for_vc_purge || !m_prebuilt->table
|| m_prebuilt->table->is_temporary()
|| m_prebuilt->table->persistent_autoinc
@@ -19921,7 +19922,6 @@ static struct st_mysql_sys_var* innobase_system_variables[]= {
MYSQL_SYSVAR(default_encryption_key_id),
MYSQL_SYSVAR(immediate_scrub_data_uncompressed),
MYSQL_SYSVAR(buf_dump_status_frequency),
- MYSQL_SYSVAR(background_thread),
MYSQL_SYSVAR(encrypt_temporary_tables),
NULL

The new function ha_innobase::open(name, true) would only be invoked from open_purge_table(). I wonder what would happen if the table is partitioned. It could be simpler to introduce a uint test_if_locked flag that would indicate that we want to skip the initialize_auto_increment() and info() calls. In 10.6, the last used flag is

#define HA_OPEN_GLOBAL_TMP_TABLE (1U << 14) /* TMP table used by replication */

In main, the last one seems to be the following:

#define HA_OPEN_DATA_READONLY (1U << 17) /* Use readonly for data */

My revised idea is that open_purge_table() would be the only place to set a new flag HA_OPEN_FOR_INNODB_PURGE, which would only be checked by ha_innobase::open(const char*, int, uint).
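The flag idea sketched above could look like this. Note that HA_OPEN_FOR_INNODB_PURGE and its bit position are hypothetical here (the next free bit after HA_OPEN_GLOBAL_TMP_TABLE in 10.6); the real value would have to be chosen against the existing HA_OPEN_* flags in the server headers.

```cpp
#include <cassert>

// Existing 10.6 flag, per the review comment above.
#define HA_OPEN_GLOBAL_TMP_TABLE (1U << 14) /* TMP table used by replication */
// Hypothetical new flag for purge-initiated opens.
#define HA_OPEN_FOR_INNODB_PURGE (1U << 15)

// Models the check that ha_innobase::open(const char*, int, uint) would
// perform: skip initialize_auto_increment() and info() when the table is
// being opened on behalf of the purge subsystem.
static bool skip_autoinc_and_info(unsigned test_if_locked)
{
  return (test_if_locked & HA_OPEN_FOR_INNODB_PURGE) != 0;
}
```

open_purge_table() would then be the only caller passing this bit, so ordinary opens keep their current behavior.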
    /** map of table identifiers to table handles and meta-data locks */
    std::unordered_map<table_id_t, std::pair<dict_table_t*,MDL_ticket*>> tables;
    /** map of table identifiers to table handles and TABLE* object,
    which is set by purge co-ordinator thread */
    std::unordered_map<table_id_t, std::pair<dict_table_t*, TABLE*>> tables;
If I remember correctly, opening a TABLE instead of only acquiring MDL_ticket can impose a significant overhead. I would make the element something like this:
class purge_table
{
  union {
    /** TABLE* with the least significant bit set */
    uintptr_t mariadb_table;
    /** metadata lock when !get_mariadb_table() */
    MDL_ticket *ticket;
  };
public:
  dict_table_t *table;
  TABLE *get_mariadb_table() const noexcept
  {
    return UNIV_UNLIKELY(mariadb_table & 1)
      ? reinterpret_cast<TABLE*>(mariadb_table & ~uintptr_t{1}) : nullptr;
  }
  MDL_ticket *get_ticket() const noexcept
  { return UNIV_LIKELY(!(mariadb_table & 1)) ? ticket : nullptr; }
};

That is, make the least significant bit of the pointer a flag which indicates that we've opened a table handle rather than only acquired an MDL. We can safely reuse the least significant bit because the typical alignment of the pointers returned by malloc() will be at least sizeof(size_t).
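A standalone model of that tagged-pointer element, with stub types in place of the real TABLE, MDL_ticket and dict_table_t (the stubs carry an int member so their alignment is at least 2, leaving the low bit of their addresses free):

```cpp
#include <cassert>
#include <cstdint>

struct TABLE { int dummy; };
struct MDL_ticket { int dummy; };

// Tagged-pointer element: the least significant bit distinguishes an
// open TABLE* (bit set) from a plain MDL_ticket* (bit clear).
class purge_table
{
  uintptr_t word = 0;
public:
  void set_mariadb_table(TABLE *t)
  { word = reinterpret_cast<uintptr_t>(t) | 1; }
  void set_ticket(MDL_ticket *m)
  { word = reinterpret_cast<uintptr_t>(m); }
  TABLE *get_mariadb_table() const noexcept
  {
    return (word & 1)
      ? reinterpret_cast<TABLE*>(word & ~uintptr_t{1}) : nullptr;
  }
  MDL_ticket *get_ticket() const noexcept
  { return !(word & 1) ? reinterpret_cast<MDL_ticket*>(word) : nullptr; }
};
```

Each element stays one machine word, so the per-batch map keeps the footprint of the MDL-only case while still being able to carry a full table handle when virtual columns require one.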
    /* Open MariaDB TABLE for tables with indexed virtual columns */
    if (*mdl && table->has_virtual_index())
This condition at the end of trx_purge_table_open() is what we actually have to add. GitHub is displaying it in two parts, because you had merged trx_purge_table_acquire() into that function. I will quote it in full to make my comment more readable:
/* Open MariaDB TABLE for tables with indexed virtual columns */
if (*mdl && table->has_virtual_index())
{
*maria_table= open_purge_table(current_thd, db_buf, db_len, tbl_buf,
tbl_len, *mdl);
if (*maria_table && table->vc_templ)
table->vc_templ->mysql_table= *maria_table;
}
return table;
}

There is a mismatch of concepts, but it existed before this change. table is the InnoDB table metadata dict_table_t, while maria_table includes a pointer maria_table->s to the MariaDB table metadata TABLE_SHARE. As far as I can tell, dict_table_t::vc_templ will continue to exist until the table is removed from the cache or some table-rebuilding DDL is executed. Also, unless I missed something, the table->vc_templ->mysql_table that we are assigning would remain cached for the remaining lifetime of table->vc_templ (until the table is evicted or rebuilt).
Is it possible to access table->vc_templ->mysql_table from multiple threads? What is the lifetime of that field? What happens to it after the table handle is closed? How would the build fail if we removed that field?
diff --git a/storage/innobase/include/dict0mem.h b/storage/innobase/include/dict0mem.h
index 0ab4e4d1fda..29dbb1185af 100644
--- a/storage/innobase/include/dict0mem.h
+++ b/storage/innobase/include/dict0mem.h
@@ -1865,9 +1865,6 @@ struct dict_vcol_templ_t {
/** default column value if any */
byte* default_rec;
- /** cached MySQL TABLE object */
- TABLE* mysql_table;
-
/** when mysql_table was cached */
uint64_t mysql_table_query_id;
The build would fail for the assignment here as well as for the read and write in innodb_find_table_for_vc(). The mysql_table_query_id is what attempts to protect us from unsafe access.
There appears to be a race condition in innodb_find_table_for_vc(). I think that we must protect the contents of table->vc_templ with table->lock_mutex_lock() in both places to avoid a race between reading and assigning the fields.
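The proposed locking fix can be sketched as follows, with std::mutex standing in for the real table->lock_mutex_lock()/lock_mutex_unlock() and flattened stub fields in place of the real vc_templ structure. The point is only that both the reader (modelling innodb_find_table_for_vc()) and the writer take the same mutex, so neither can observe a half-updated state.

```cpp
#include <cassert>
#include <mutex>

struct TABLE { int dummy; };

// Stand-in for dict_table_t with its lock_mutex.
struct dict_table_t
{
  std::mutex lock_mutex;                 // models dict_table_t::lock_mutex
  TABLE *vc_templ_mysql_table = nullptr; // models vc_templ->mysql_table

  // Reader side: models innodb_find_table_for_vc().
  TABLE *find_table_for_vc()
  {
    std::lock_guard<std::mutex> g(lock_mutex);
    return vc_templ_mysql_table;
  }
  // Writer side: models the assignment in trx_purge_table_open().
  void cache_table_for_vc(TABLE *t)
  {
    std::lock_guard<std::mutex> g(lock_mutex);
    vc_templ_mysql_table = t;
  }
};
```
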
Problem:
A single InnoDB purge worker thread can process undo log records from different tables within the same batch. But get_purge_table() and open_purge_table() incorrectly assume a 1:1 relationship between a purge worker thread and a table within a single batch. Based on this wrong assumption, InnoDB attempts to reuse TABLE objects cached in thd->open_tables for virtual column computation.
Solution:
get_purge_table(): Accept the db_name and table_name associated with the current undo log record and compare them against the existing cached TABLE objects. If there is a match, return the cached TABLE object.