INT
(1) 4 bytes of storage. An INT is 4 bytes, so it takes less space than DATETIME, its indexes are correspondingly smaller, and sorting and querying are marginally faster.
(2) Readability is very poor: you cannot tell at a glance what a raw value means, which can be frustrating.
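The readability problem can be worked around at query time with MySQL's UNIX_TIMESTAMP() and FROM_UNIXTIME() converters. A minimal sketch (the orders table and created_at column are invented for illustration):

-- Hypothetical table: created_at holds Unix seconds in an INT column
SELECT id,
       created_at,                           -- raw INT, unreadable by itself
       FROM_UNIXTIME(created_at) AS created  -- rendered as 'YYYY-MM-DD HH:MM:SS'
FROM orders
WHERE created_at >= UNIX_TIMESTAMP('2013-08-01 00:00:00');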
TIMESTAMP
(1) 4 bytes of storage.
(2) Values are stored as UTC.
(3) Time zone conversion: the value is converted from the current session time zone when stored, and converted back to the session time zone when retrieved.
(4) TIMESTAMP values cannot be earlier than 1970 or later than 2038-01-19 03:14:07 UTC (the 32-bit Unix time limit).
DATETIME
(1) 8 bytes of storage.
(2) Independent of time zone.
(3) DATETIME values are retrieved and displayed in 'YYYY-MM-DD HH:MM:SS' format. The supported range is '1000-01-01 00:00:00' to '9999-12-31 23:59:59'.
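The practical difference between TIMESTAMP's time zone conversion and DATETIME's time zone independence is easy to observe in one session. A minimal sketch (table t is invented; SET time_zone is standard MySQL):

-- Hypothetical demo table with one column of each type
CREATE TABLE t (ts TIMESTAMP NOT NULL, dt DATETIME NOT NULL);
SET time_zone = '+00:00';
INSERT INTO t VALUES ('2013-08-01 12:00:00', '2013-08-01 12:00:00');
SET time_zone = '+08:00';
SELECT ts, dt FROM t;
-- ts is shifted to the new session zone: 2013-08-01 20:00:00
-- dt is returned exactly as stored:      2013-08-01 12:00:00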
MySQL has only become widely popular in the last few years and its performance keeps improving; which representation to use comes down to personal habit and project requirements.
Here are excerpts from two articles benchmarking INT vs TIMESTAMP vs DATETIME performance.
MyISAM: MySQL DATETIME vs TIMESTAMP vs INT benchmark
CREATE TABLE `test_datetime` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`datetime` FIELDTYPE NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM;
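FIELDTYPE is a placeholder; presumably the table was created once per type under test, e.g. for the TIMESTAMP run:

CREATE TABLE `test_timestamp` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `datetime` TIMESTAMP NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM;
-- ...and likewise with DATETIME for test_datetime and INT for test_int.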
Server configuration (my.cnf):
skip-locking
key_buffer = 128M
max_allowed_packet = 1M
table_cache = 512
sort_buffer_size = 2M
read_buffer_size = 2M
read_rnd_buffer_size = 8M
myisam_sort_buffer_size = 8M
thread_cache_size = 8
query_cache_type = 0
query_cache_size = 0
thread_concurrency = 4
Test results (three timed runs per type; the last column appears to be the MyISAM data-file size in bytes):

DATETIME    14111   14010   14369   130000000
TIMESTAMP   13888   13887   14122    90000000
INT         13270   12970   13496    90000000
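If the last column is indeed the MyISAM data-file size in bytes, the numbers check out: with MyISAM's fixed row format, a DATETIME row is 13 bytes (4-byte id + 8-byte DATETIME + 1 status byte) and a TIMESTAMP or INT row is 9 bytes, giving 130,000,000 and 90,000,000 bytes for 10M rows. A sketch of how to verify this per table:

mysql> SHOW TABLE STATUS LIKE 'test_datetime'\G
-- Expect Avg_row_length: 13 and Data_length: 130000000 for the DATETIME table.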
Running the exports from mysql:
mysql> select * from test_datetime into outfile '/tmp/test_datetime.sql';
Query OK, 10000000 rows affected (6.19 sec)
mysql> select * from test_timestamp into outfile '/tmp/test_timestamp.sql';
Query OK, 10000000 rows affected (8.75 sec)
mysql> select * from test_int into outfile '/tmp/test_int.sql';
Query OK, 10000000 rows affected (4.29 sec)
-- Convert the DATETIME data to INT in place, then export it:
alter table test_datetime rename test_int;                    -- reuse the DATETIME rows
alter table test_int add column datetimeint INT NOT NULL;     -- add the target INT column
update test_int set datetimeint = UNIX_TIMESTAMP(datetime);   -- DATETIME -> Unix seconds
alter table test_int drop column datetime;                    -- drop the original column
alter table test_int change column datetimeint datetime int not null;  -- rename it to `datetime`
select * from test_int into outfile '/tmp/test_int2.sql';     -- export the INT version
drop table test_int;                                          -- tables are reloaded below
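A sanity check, not in the original article, that could be slotted in before the DROP COLUMN step to confirm the conversion round-trips in the current session time zone:

-- Hypothetical check, run while both columns still exist:
SELECT COUNT(*) AS mismatches
FROM test_int
WHERE FROM_UNIXTIME(datetimeint) <> `datetime`;
-- Expect 0: FROM_UNIXTIME() inverts UNIX_TIMESTAMP() for in-range
-- values within a single session time zone.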
So now I have exactly the same timestamps from the DATETIME test, and it will be possible to reuse the originals for TIMESTAMP tests as well.
mysql> load data infile '/export/home/ntavares/test_datetime.sql' into table test_datetime;
Query OK, 10000000 rows affected (41.52 sec)
Records: 10000000 Deleted: 0 Skipped: 0 Warnings: 0
mysql> load data infile '/export/home/ntavares/test_datetime.sql' into table test_timestamp;
Query OK, 10000000 rows affected, 44 warnings (48.32 sec)
Records: 10000000 Deleted: 0 Skipped: 0 Warnings: 44
mysql> load data infile '/export/home/ntavares/test_int2.sql' into table test_int;
Query OK, 10000000 rows affected (37.73 sec)
Records: 10000000 Deleted: 0 Skipped: 0 Warnings: 0
As expected, INT loads fastest, since it is stored as is while the other types have to be recalculated on load. Notice how TIMESTAMP is still slower than DATETIME, even though it uses half of DATETIME's storage size.
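The 44 warnings on the TIMESTAMP load are plausibly boundary values falling outside the TIMESTAMP range (values at or before '1970-01-01 00:00:00' UTC cannot be represented). To see them, one could run this right after the load, in the same session:

SHOW WARNINGS LIMIT 10;  -- shows the code and message for each problematic value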
Let’s check the performance of full table scan:
mysql> SELECT SQL_NO_CACHE count(id) FROM test_datetime WHERE datetime > '1970-01-01 01:30:00' AND datetime < '1970-01-01 01:35:00';
+-----------+
| count(id) |
+-----------+
|    211991 |
+-----------+
1 row in set (3.93 sec)
mysql> SELECT SQL_NO_CACHE count(id) FROM test_timestamp WHERE datetime > '1970-01-01 01:30:00' AND datetime < '1970-01-01 01:35:00';
+-----------+
| count(id) |
+-----------+
|    211991 |
+-----------+
1 row in set (9.87 sec)
mysql> SELECT SQL_NO_CACHE count(id) FROM test_int WHERE datetime > UNIX_TIMESTAMP('1970-01-01 01:30:00') AND datetime < UNIX_TIMESTAMP('1970-01-01 01:35:00');
+-----------+
| count(id) |
+-----------+
|    211991 |
+-----------+
1 row in set (15.12 sec)
Then again, TIMESTAMP performs worst, and the UNIX_TIMESTAMP() recalculation seemed to have an impact, so the next thing worth testing was removing that recalculation: find the integer equivalents of those UNIX_TIMESTAMP() values and use them directly:
mysql> select UNIX_TIMESTAMP('1970-01-01 01:30:00') AS lower, UNIX_TIMESTAMP('1970-01-01 01:35:00') AS bigger;
+-------+--------+
| lower | bigger |
+-------+--------+
|  1800 |   2100 |
+-------+--------+
1 row in set (0.00 sec)
mysql> SELECT SQL_NO_CACHE count(id) FROM test_int WHERE datetime > 1800 AND datetime < 2100;
+-----------+
| count(id) |
+-----------+
|    211991 |
+-----------+
1 row in set (1.94 sec)
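A closing observation, not from the original article: none of these tables has an index on `datetime` (only the PRIMARY KEY on id), so every query above is a full table scan, and the time differences come purely from per-row comparison and conversion cost. A quick sketch to confirm the plan:

mysql> EXPLAIN SELECT count(id) FROM test_int WHERE datetime > 1800 AND datetime < 2100;
-- Expect type: ALL and rows close to 10000000, i.e. a full table scan.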