Adding another grok pattern to the filebeats mongo module pipeline. #7560
Conversation
Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?
Added.
@cwray Thanks for the contribution. Could you share a few log lines from a mongodb log file to test against the new pattern? I initially wanted to ask you to add it to our existing test file but just realised it's missing :-( Will try to open a PR to add one so you can add it then. Could you add an entry to the changelog file?
Do you mind rebasing your PR on top of mine? This contains the missing example logs: #7568
For the log lines that don't match the existing pattern, the new pattern works fine.
It might also be good to catch anything with a COMMAND statement and parse it out differently, to be able to look at what commands are running and how long they are taking. But that is for another day.
Experiencing an issue because of this as well. Would love for this fix to make it in by default so I don't have to manage my own ingest pipelines.
FWIW, I was experiencing an issue related to this: https://discuss.elastic.co/t/mongodb-module-grok-expression-does-not-match/139241/4 However, creating a pipeline manually with the grok patterns in this PR still did not fix my problem. The patterns pass when using the simulate pipeline API, but not when using filebeat.
What does your pipeline look like? Here is what is currently working for me.
You will need to add this pipeline, then remove the current index and restart filebeat. I'm not sure why I had to remove the current index to get this all to work.
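A rough sketch of the kind of custom pipeline being discussed, for illustration only (the pipeline id and the second, catch-all grok pattern are hypothetical; the first pattern is the module's existing one, shown in the diff further down):

```
PUT _ingest/pipeline/filebeat-mongodb-log-custom
{
  "description": "MongoDB log pipeline with an extra, more permissive grok pattern",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "%{TIMESTAMP_ISO8601:mongodb.log.timestamp} %{WORD:mongodb.log.severity} %{WORD:mongodb.log.component} \\s*\\[%{WORD:mongodb.log.context}\\] %{GREEDYDATA:mongodb.log.message}",
          "%{TIMESTAMP_ISO8601:mongodb.log.timestamp} %{GREEDYDATA:mongodb.log.message}"
        ]
      }
    }
  ]
}
```

Grok tries the patterns in order and uses the first one that matches, so the stricter pattern stays first. Filebeat can then be pointed at a custom pipeline via the output.elasticsearch.pipeline setting in filebeat.yml.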
@cwray I copied the default filebeat pipeline, but just changed the grok patterns. I've totally purged ES of indexes related to filebeat and I removed the existing pipelines before adding the custom pipeline. When the documents get ingested into ES though, I get "Grok expressions do not match". Just so I'm clear though, what do you mean when you say "remove the current index"?
@ruflin I would love to but I'm a bit new to this process. What is the best way to handle the rebase and push back to this branch? I tried and I think I screwed it up.
packetbeat/procs/zsyscall_windows.go (outdated)

@@ -77,3 +78,22 @@ func _GetExtendedTcpTable(pTcpTable uintptr, pdwSize *uint32, bOrder bool, ulAf
 }
 return
 }
+
+func _GetExtendedUdpTable(pTcpTable uintptr, pdwSize *uint32, bOrder bool, ulAf uint32, tableClass uint32, reserved uint32) (code syscall.Errno, err error) {
func _GetExtendedUdpTable should be _GetExtendedUDPTable
func parameter pTcpTable should be pTCPTable
packetbeat/procs/zsyscall_windows.go (outdated)

@@ -57,6 +57,7 @@ var (
 modiphlpapi = windows.NewLazySystemDLL("iphlpapi.dll")

 procGetExtendedTcpTable = modiphlpapi.NewProc("GetExtendedTcpTable")
+procGetExtendedUdpTable = modiphlpapi.NewProc("GetExtendedUdpTable")
var procGetExtendedUdpTable should be procGetExtendedUDPTable
packetbeat/procs/syscall_windows.go (outdated)

 owningPID uint32
 }

+type UDP6RowOwnerPID struct {
exported type UDP6RowOwnerPID should have comment or be unexported
packetbeat/procs/syscall_windows.go (outdated)

@@ -51,6 +52,19 @@ type TCP6RowOwnerPID struct {
 owningPID uint32
 }

+type UDPRowOwnerPID struct {
exported type UDPRowOwnerPID should have comment or be unexported
packetbeat/procs/syscall_windows.go (outdated)

@@ -24,6 +24,7 @@ import (
 )

+const (
+	UDP_TABLE_OWNER_PID = 1
don't use ALL_CAPS in Go names; use CamelCase
exported const UDP_TABLE_OWNER_PID should have comment (or a comment on this block) or be unexported
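The shape of the fix the linter is asking for here would look roughly like the following; this is a sketch, not necessarily the change that was actually committed (the value 1 comes from the diff above):

```go
package procs

// UDPTableOwnerPID is the TableClass value passed to GetExtendedUdpTable to
// request the UDP endpoint table that includes the owning process ID per row.
const UDPTableOwnerPID = 1
```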
 // specific language governing permissions and limitations
 // under the License.

+package index_recovery
don't use an underscore in package name
 // specific language governing permissions and limitations
 // under the License.

+package add_kubernetes_metadata
don't use an underscore in package name
libbeat/common/tuples.go (outdated)

 return tuple
 }

-func (t *TCPTuple) ComputeHashebles() {
+func (t *TCPTuple) ComputeHashables() {
exported method TCPTuple.ComputeHashables should have comment or be unexported
libbeat/common/tuples.go (outdated)

 return tuple
 }

-func (t *IPPortTuple) ComputeHashebles() {
+func (t *IPPortTuple) ComputeHashables() {
exported method IPPortTuple.ComputeHashables should have comment or be unexported
@@ -66,7 +66,7 @@ const (
 type matcher func(last, current []byte) bool

 var (
-	sigMultilineTimeout = errors.New("multline timeout")
+	sigMultilineTimeout = errors.New("multiline timeout")
error var sigMultilineTimeout should have name of the form errFoo
Add elastic/beats to your remotes, if you don't have it already.
Rebase your branch, resolve any conflicts, and force push it to your repo.
Also, it's a good idea to create a backup branch before you start rebasing. If things get ugly, you can revert to your backup.
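A minimal sketch of that workflow, assuming the fork's remote is called origin and the feature branch is called mongodb-grok (both names are illustrative):

```
# Add the upstream repo as a remote and fetch it (skip if already added)
git remote add elastic https://github.com/elastic/beats.git
git fetch elastic

# Create a backup branch in case the rebase goes wrong
git branch mongodb-grok-backup

# Rebase the feature branch onto upstream master, resolving conflicts as they come up
git checkout mongodb-grok
git rebase elastic/master

# Force push the rewritten branch back to the fork (this updates the PR)
git push --force origin mongodb-grok
```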
(force-pushed from f1c681f to f8687bd)
(force-pushed from f8687bd to 069b5e7)
Ok, I think I managed to unscrew my commit from yesterday and rebase. Let me know if I can do anything else.
@cwray I'm still not able to get this working. I copied exactly what is in the commit and created a new pipeline. Deleted the indices I'm going to be writing to. Then restarted filebeat using the new pipeline and the grok expressions still don't match. Am I missing something obvious? The exact same pipeline passes when using the simulate API.
@cwray Rebase looks good and all green.
What would be great now is if you could add one or more log lines which were not working before to this file: https://github.com/elastic/beats/blob/master/filebeat/module/mongodb/log/test/mongodb-debian-3.2.11.log
After that you can run GENERATE=1 INTEGRATION_TESTS=1 TESTING_FILEBEAT_MODULES=mongodb nosetests tests/system/test_modules.py to update the generated content. You will need to have an Elasticsearch instance running on localhost:9200 for this to work.
If it does not work, I could also do it for you if you give me access to your branch.
Could you also add a CHANGELOG entry?
@@ -4,7 +4,9 @@
 "grok": {
 "field": "message",
 "patterns":[
 "%{TIMESTAMP_ISO8601:mongodb.log.timestamp} %{WORD:mongodb.log.severity} %{WORD:mongodb.log.component} \\s*\\[%{WORD:mongodb.log.context}\\] %{GREEDYDATA:mongodb.log.message}"
+
Can you remove this empty line?
Sure, I will give all this build stuff a go if I get a chance today.
@ruflin I tried to give this a go but am having a few problems. I have given you access to my repo if you would like to give it a try.
What is the error you are getting?
You need to put your example logs in filebeat/module/mongodb/log/test/mongodb-debian-3.2.11.log. Before running the generator command, you need to build filebeat using make filebeat.test and then run GENERATE=1 INTEGRATION_TESTS=1 TESTING_FILEBEAT_MODULES=mongodb nosetests tests/system/test_modules.py. The full sequence is sketched below.
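Put together, the sequence looks like this (assuming the commands are run from the filebeat directory of a beats checkout, with an Elasticsearch instance on localhost:9200):

```
# Build the filebeat test binary used by the system tests
make filebeat.test

# Regenerate the expected output files for the mongodb module
GENERATE=1 INTEGRATION_TESTS=1 TESTING_FILEBEAT_MODULES=mongodb nosetests tests/system/test_modules.py
```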
FWIW I'm still having this issue. Can't get our mongo logs into ES because of it.
Sorry, this has not been revisited in a while. I have been pulled off this project at my company and reassigned to a different task.
This problem is already fixed with the latest filebeat mongodb grok pattern, which produces the expected JSON output.
Will close this PR now.
Adding another grok pattern to the filebeat mongodb module's ingest pipeline.
We have some extended logging turned on in mongo. The log file format that mongo creates for these extended logs is not parseable by the current grok filter, so this adds another grok statement to the list to handle these extended mongo logs.
See: https://discuss.elastic.co/t/filebeat-mongo-module-grok-pattern-is-incorrect/138971/4
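For context, a standard MongoDB 3.x log line has the shape timestamp, severity, component, [context], message, which is what the existing pattern expects. The line below is an illustrative example of that standard shape; the extended log lines this PR targets deviate from it:

```
2018-07-03T10:12:05.123+0000 I COMMAND  [conn42] command mydb.users command: find { find: "users", filter: {} } planSummary: COLLSCAN docsExamined:100 reslen:4500 protocol:op_query 12ms
```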