It could just be different statistical definitions of failure. It is a good bet that they test it by taking a large batch of flash, repeatedly cycling it, and noting failures. They could then make a graph with % of the batch failed on the vertical axis and cycles on the horizontal axis, check where the data crosses some chosen failure percentage, and read the corresponding number of cycles off the horizontal axis. But what percentage? 0.1%? Lower?
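To make the idea concrete, here is a minimal sketch of reading a cycle rating off that kind of cumulative-failure data. Everything here is hypothetical: the function name, the sample batch, and the cycle counts are made up for illustration, not real manufacturer data.

```python
import math

def cycles_at_failure_fraction(failure_cycles, fraction):
    """Given the cycle count at which each sample die failed, return
    the cycle count by which `fraction` of the batch had failed.
    This is where the cumulative-failure curve crosses `fraction`."""
    ordered = sorted(failure_cycles)
    # Index of the first failure that pushes the cumulative failed
    # fraction up to `fraction` of the batch.
    k = math.ceil(fraction * len(ordered))
    if k < 1:
        raise ValueError("fraction too small to resolve with this sample size")
    return ordered[k - 1]

# Hypothetical batch of 10 dies and the cycle count at which each failed.
batch = [3000, 3500, 4000, 4200, 4500, 4700, 5000, 5200, 5500, 6000]

print(cycles_at_failure_fraction(batch, 0.10))  # 3000: 10% of the batch gone
print(cycles_at_failure_fraction(batch, 0.50))  # 4500: half the batch gone
```

Note how much the answer depends on the threshold you pick: the same data yields a 3000-cycle rating at a 10% failure definition but 4500 at 50%, which is exactly why two vendors testing identical flash could still publish different endurance numbers.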
Maybe there is a standard that the major flash manufacturers have agreed upon for defining failure in a test like this, but if there is, I do not know what it is. Even if Intel and Micron are using the same definition, I can imagine that they could get significantly different results by testing different batches of flash. Or perhaps Intel just has tougher production-test standards and fails more marginal dies or wafers than Micron does. There are so many possibilities.