Invalid row data #25
Comments
Thanks for reporting this! This is for whoever is going to tackle this issue: I don't think using …
I've been trying to think about what you said about read_fully, but are you sure that's correct in this case? If I'm not mistaken, the … Perhaps you mean that the … I would really like to know whether the solution I proposed is the correct one, so I can submit a pull request, close this issue, and safely use the crystal-mysql shard.
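For reference, a minimal standalone sketch of the distinction under discussion, using only Crystal's standard IO (none of this is crystal-mysql code): IO#read performs a single underlying read and may fill only part of the slice, while IO#read_fully keeps reading until the slice is full.

```crystal
reader, writer = IO.pipe

# Deliver 6 bytes in two chunks, as a socket under load might.
spawn do
  writer.write Bytes[1, 2, 3]
  sleep 0.1.seconds
  writer.write Bytes[4, 5, 6]
end

buf = Bytes.new(6)
# reader.read(buf) could legitimately return 3 here, leaving the
# second half of buf unwritten; a parser that assumes a full buffer
# would then read garbage.
reader.read_fully(buf) # blocks until all 6 bytes have arrived
p buf # => Bytes[1, 2, 3, 4, 5, 6]
```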
@bcardiff, is anyone working on this, or can someone give me some direction on what is required to get a pull request accepted? As always, I'm happy to do the work :-)
@benoist, could you confirm that the caller to that … After a bit of review I think that is the only call that may cause it. How many columns did you need to experience this issue? Because the …
I indeed traced it back to that call with the …
Fixed on master now. I was unable to reproduce the issue though. I tried the following spec (which I didn't commit) that inserts 2 million records of 1000 columns with just nulls:

```crystal
it "gets many columns from table" do
  with_test_db do |db|
    columns = 1000
    row = 2_000_000
    db.exec "create table table1 (#{(1..columns).map { |i| "c#{i} int null" }.join(", ")})"
    row.times do
      db.exec %(insert into table1 (c1) values (NULL))
    end
    db.query "select * from table1" do |rs|
      row.times do
        rs.move_next.should be_true
        columns.times do
          rs.read.should be_nil
        end
      end
      rs.move_next.should be_false
    end
  end
end
```

Even with that I was unable to repro, but it's true that …
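One plausible reason the spec above doesn't trip the bug (my assumption, not something established in the thread): rows of NULLs are tiny, so each packet likely arrives in a single read. A variant that makes every row larger than a typical socket read, and therefore likely to be split across reads, might look like this (table2 and the sizes here are made up for illustration):

```crystal
it "gets large rows from table" do
  with_test_db do |db|
    db.exec "create table table2 (data mediumtext)"
    big = "x" * 1_000_000 # ~1 MB per row, far more than one socket read
    10.times { db.exec "insert into table2 values (?)", big }
    db.query "select * from table2" do |rs|
      10.times do
        rs.move_next.should be_true
        rs.read(String).size.should eq big.size
      end
      rs.move_next.should be_false
    end
  end
end
```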
Great, thanks!!
Hi,
I was having some problems with query results when using specific data.
When reading a table with 1.3 million records, everything was retrieved correctly as long as each row carried a limited amount of data. When the data per row was larger, the query returned wrong results.
I was able to trace it back to the packet reader.
Currently it says:
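(The original snippet didn't survive in this copy of the issue; what follows is a minimal sketch of the shape being described, assuming a packet reader class that wraps the underlying connection as @io. Names and signatures are illustrative, not the shard's exact source.)

```crystal
# Illustrative sketch: delegate straight to the wrapped IO.
# IO#read performs at most one underlying read, so it may return
# fewer than slice.size bytes and leave the rest of the slice untouched.
def read(slice : Bytes)
  @io.read(slice)
end
```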
When I change the @io.read(slice) to @io.read_fully(slice) like below, my query runs correctly.
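(Again a sketch under the same assumptions, showing the one-line change being described.)

```crystal
# Illustrative sketch: read_fully loops until the slice is completely
# filled, raising IO::EOFError if the stream ends first.
def read(slice : Bytes)
  @io.read_fully(slice)
end
```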
I'm not sure if this is the correct fix.