Cannot produce big messages #181
Comments
Hey @turtledev1, see xk6-kafka/scripts/test_json.js, lines 43 to 51 (at 9b2a826).
Also, you need to import CODEC_ZSTD and pass it in place of the "zstd" string.
Let me know if you aren't creating the topic in the test, so I can dig deeper into this. |
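(For illustration, a minimal sketch of enabling zstd on the writer side, assuming the compression option on the Writer config; the broker address and topic name are placeholders, not values from this thread.)
import { Writer, CODEC_ZSTD } from "k6/x/kafka";

const writer = new Writer({
    brokers: ["localhost:9092"], // placeholder broker address
    topic: "my-topic",           // placeholder topic name
    compression: CODEC_ZSTD,     // pass the constant, not the string "zstd"
});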
My topic already exists, so I'm not creating it. Also, the zstd compression was my first attempt at fixing this because I thought the messages were too big; I have the same issue without the compression. I know my setup works because if I generate a smaller message, everything works as expected and my consumer correctly receives the message. |
Let's forget about the compression; I have the same result without it. Here is my complete code:
// test.ts
import { Connection, SchemaRegistry, SCHEMA_TYPE_AVRO, Writer } from 'k6/x/kafka';
import { config } from './config/config';
import { generateData } from './data';
const brokers = config.kafka.brokerList;
const schemaRegistry = new SchemaRegistry({
url: config.kafka.schemaRegistryUrl
});
const topic = config.kafka.topic;
const writer = new Writer({
brokers: brokers,
topic: topic,
autoCreateTopic: true,
});
const connection = new Connection({
address: brokers[0],
});
const valueSchemaObject = schemaRegistry.getSchema({
subject: `${topic}-value`,
schemaType: SCHEMA_TYPE_AVRO,
});
export function teardown() {
writer.close();
connection.close();
}
export default function () {
for (let index = 0; index < 1; index++) {
const messages = [
{
value: schemaRegistry.serialize({
data: generateData(50, 50, 20),
schema: valueSchemaObject,
schemaType: SCHEMA_TYPE_AVRO,
}),
},
];
writer.produce({ messages: messages });
}
}
// data.ts
export function generateData(sectionNumber: number, rowNumber: number, seatNumber: number) {
let seatRef = 1;
const sections = [];
for (let sectionId = 0; sectionId < sectionNumber; sectionId++) {
const sectionName = `section${sectionId}`;
const rows = [];
for (let rowId = 0; rowId < rowNumber; rowId++) {
const rowName = `row${rowId}`;
const seats = [];
for (let seatId = 0; seatId < seatNumber; seatId++) {
seats.push({
name: `seat${seatId}`,
group: {
'string': sectionName,
},
ref: `seatRef${seatRef}`,
});
seatRef++;
}
rows.push({
name: rowName,
seats: seats,
});
}
sections.push({
name: sectionName,
rows: rows,
});
}
return {
id: '11ead359-db45-dbaa-9137-3b6a288b93c6',
sections: sections
};
}
And here is the schema in the registry:
{
"fields": [
{
"name": "id",
"type": {
"logicalType": "uuid",
"type": "string"
}
},
{
"default": [],
"name": "sections",
"type": {
"items": {
"fields": [
{
"name": "name",
"type": {
"avro.java.string": "String",
"type": "string"
}
},
{
"default": [],
"name": "rows",
"type": {
"items": {
"fields": [
{
"name": "name",
"type": {
"avro.java.string": "String",
"type": "string"
}
},
{
"default": [],
"name": "seats",
"type": {
"items": {
"fields": [
{
"name": "name",
"type": {
"avro.java.string": "String",
"type": "string"
}
},
{
"name": "ref",
"type": {
"avro.java.string": "String",
"type": "string"
}
},
{
"default": null,
"name": "group",
"type": [
"null",
{
"avro.java.string": "String",
"type": "string"
}
]
}
],
"name": "Seat",
"type": "record"
},
"type": "array"
}
}
],
"name": "Row",
"type": "record"
},
"type": "array"
}
}
],
"name": "Section",
"type": "record"
},
"type": "array"
}
}
],
"name": "Manifest",
"namespace": "venue.avro.manifest",
"type": "record"
}
With |
I slightly modified your script and found out that you're producing Kafka messages that are bigger than Kafka's default 1 MB limit, as you can see in the script and console output below.
Script
import { SchemaRegistry, SCHEMA_TYPE_AVRO } from "k6/x/kafka";
const schemaRegistry = new SchemaRegistry();
const valueSchemaObject = JSON.stringify({
type: "record",
name: "Value",
namespace: "dev.mostafa.xk6.kafka",
fields: [
{
name: "id",
type: {
logicalType: "uuid",
type: "string",
},
},
{
default: [],
name: "sections",
type: {
items: {
fields: [
{
name: "name",
type: {
"avro.java.string": "String",
type: "string",
},
},
{
default: [],
name: "rows",
type: {
items: {
fields: [
{
name: "name",
type: {
"avro.java.string": "String",
type: "string",
},
},
{
default: [],
name: "seats",
type: {
items: {
fields: [
{
name: "name",
type: {
"avro.java.string": "String",
type: "string",
},
},
{
name: "ref",
type: {
"avro.java.string": "String",
type: "string",
},
},
{
default: null,
name: "group",
type: [
"null",
{
"avro.java.string": "String",
type: "string",
},
],
},
],
name: "Seat",
type: "record",
},
type: "array",
},
},
],
name: "Row",
type: "record",
},
type: "array",
},
},
],
name: "Section",
type: "record",
},
type: "array",
},
},
],
name: "Manifest",
namespace: "venue.avro.manifest",
type: "record",
});
// https://stackoverflow.com/a/71209062/6999563
function bytesForHuman(bytes, decimals = 2) {
let units = ["B", "KB", "MB", "GB", "TB", "PB"];
let i = 0;
for (i; bytes > 1024; i++) {
bytes /= 1024;
}
return parseFloat(bytes.toFixed(decimals)) + " " + units[i];
}
function generateData(sectionNumber, rowNumber, seatNumber) {
let seatRef = 1;
const sections = [];
for (let sectionId = 0; sectionId < sectionNumber; sectionId++) {
const sectionName = `section${sectionId}`;
const rows = [];
for (let rowId = 0; rowId < rowNumber; rowId++) {
const rowName = `row${rowId}`;
const seats = [];
for (let seatId = 0; seatId < seatNumber; seatId++) {
seats.push({
name: `seat${seatId}`,
group: {
string: sectionName,
},
ref: `seatRef${seatRef}`,
});
seatRef++;
}
rows.push({
name: rowName,
seats: seats,
});
}
sections.push({
name: sectionName,
rows: rows,
});
}
return {
id: "11ead359-db45-dbaa-9137-3b6a288b93c6",
sections: sections,
};
}
export default function () {
const smallMsg = schemaRegistry.serialize({
data: generateData(50, 50, 10),
schema: { schema: valueSchemaObject },
schemaType: SCHEMA_TYPE_AVRO,
});
const bigMsg = schemaRegistry.serialize({
data: generateData(50, 50, 20),
schema: { schema: valueSchemaObject },
schemaType: SCHEMA_TYPE_AVRO,
});
console.log("Big message size: ", bigMsg.length, " bytes");
console.log("Big message size (humanized): ", bytesForHuman(bigMsg.length));
console.log("Small message size: ", smallMsg.length, " bytes");
console.log("Small message size (humanized): ", bytesForHuman(smallMsg.length));
// https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html
console.log("message.max.bytes: ", bytesForHuman(1048588));
}
Console output
$ ./k6 run test.ts
          /\      |‾‾| /‾‾/   /‾‾/
     /\  /  \     |  |/  /   /  /
    /  \/    \    |     (   /   ‾‾\
   /          \   |  |\  \ |  (‾)  |
  / __________ \  |__| \__\ \_____/ .io
execution: local
script: test.ts
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
INFO[0000] Big message size: 1524028 bytes source=console
INFO[0000] Big message size (humanized): 1.45 MB source=console
INFO[0000] Small message size: 754028 bytes source=console
INFO[0000] Small message size (humanized): 736.36 KB source=console
INFO[0000] message.max.bytes: 1 MB source=console
running (00m00.6s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs 00m00.6s/10m0s 1/1 iters, 1 per VU
data_received........: 0 B 0 B/s
data_sent............: 0 B 0 B/s
iteration_duration...: avg=600.34ms min=600.34ms med=600.34ms max=600.34ms p(90)=600.34ms p(95)=600.34ms
iterations...........: 1 1.662717/s
I consider this ticket closed. |
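(For illustration, a sketch of applying the same size check inside the producer before calling produce, reusing writer, schemaRegistry, valueSchemaObject and generateData from the user's script above; the 1048588-byte figure is the broker default of message.max.bytes cited above and may differ on your cluster.)
const MESSAGE_MAX_BYTES = 1048588; // assumed broker default; check your cluster's message.max.bytes

export default function () {
    const value = schemaRegistry.serialize({
        data: generateData(50, 50, 20),
        schema: valueSchemaObject,
        schemaType: SCHEMA_TYPE_AVRO,
    });
    // Skip (or split) payloads the broker would reject as too large.
    if (value.length > MESSAGE_MAX_BYTES) {
        console.warn(`Skipping oversized message: ${value.length} bytes`);
    } else {
        writer.produce({ messages: [{ value: value }] });
    }
}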
Sorry to add to this again, but even with compression, I have the same error, even though the message is way smaller. There also seems to be a misconception on your side that the compression type needs to be set at the topic level for it to work, but that's not true if the topic has the 'producer' compression type, which ours has (see https://kafka.apache.org/documentation/#topicconfigs_compression.type).
Also, as proof of this, we currently produce and consume messages that are at least twice the size of the messages that fail in my example. Maybe the compression is not applied correctly in the writer? |
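(For illustration only: a sketch of how a topic-level 'producer' compression type could be set if the topic were created from the test itself, assuming connection.createTopic accepts configEntries with configName/configValue pairs; the topic name is a placeholder and none of this is confirmed by the thread, since the user's topic already exists.)
connection.createTopic({
    topic: "my-topic", // placeholder topic name
    numPartitions: 1,
    replicationFactor: 1,
    configEntries: [
        // "producer" tells the broker to keep whatever codec the producing client used
        { configName: "compression.type", configValue: "producer" },
    ],
});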
@turtledev1 Also, the message you received in the terminal is |
Omg thank you so much! I confirm that this works! Really appreciate the time you took to help me! |
I'm trying to produce a big Kafka message in my tests, but I get a really weird error.
Note: I didn't paste the whole error because it's too big, but it's basically a bunch of numbers like these.
Here is my code