I'm having problems with the JDBC persistence service and need your help.
The setup:
Site A: Raspberry Pi 3B with openHABian 3.3.0, freshly installed
Site B: Synology NAS with MariaDB 10, reachable via DynDNS
For testing, a rule on the Raspberry increments a Number item by one every 5 seconds; on every change, this item is supposed to be written to the database.
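Such a rule might look like this (a sketch in Rules DSL; the item name `TestCounter` is an assumption, since the actual item name is not shown in this post):

```
rule "Increment test counter"
when
    Time cron "0/5 * * * * ?"   // every 5 seconds
then
    // start at 0 if the item has no state yet
    if (TestCounter.state == NULL) {
        TestCounter.postUpdate(0)
    } else {
        TestCounter.postUpdate((TestCounter.state as Number).intValue + 1)
    }
end
```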
The item "Systeminfo_Arbeitsspeicher_Belegt" (from the Systeminfo binding) should likewise be persisted on every change.
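For context, the matching jdbc.persist looks roughly as follows; this is only a sketch, and the item names are assumptions based on the log excerpt below:

```
// sketch of jdbc.persist (item names are assumptions)
Strategies {
    default = everyChange
}
Items {
    // the test counter and the memory item, persisted on every change
    TestCounter, Systeminfo_Belegt : strategy = everyChange
}
```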
This did work at first, but it kept running into timeouts; by now it produces nothing but timeouts, and nothing is written to the database anymore.
Rebooting the Raspberry does not fix the problem.
The following jdbc.cfg is in place at site A:
Code:
url=jdbc:mariadb://<dyndns>:<port>/<datenbank>?serverTimezone=Europe/Berlin
# required database user
user=<user>
# required database password
password=<passwort>
errReconnectThreshold=100
sqltype.NUMBER = DOUBLE
sqltype.STRING = VARCHAR(65500)
tableNamePrefix=Item
tableIdDigitCount=0
rebuildTableNames=false
enableLogTime=false
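As a side note, whether the Pi can reach the NAS at all can be checked independently of openHAB, for example with netcat (`<dyndns>` and `<port>` are the same placeholders as in jdbc.cfg above):

```shell
# Quick reachability check from the Pi to the MariaDB port on the NAS
# -z: scan only, -v: verbose, -w 5: 5-second timeout
nc -vz -w 5 <dyndns> <port>
```

If this already times out, the problem lies with DynDNS resolution, port forwarding, or the NAS firewall rather than with the JDBC service itself.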
The log shows the following events:
Code:
2022-11-18 19:21:26.950 [INFO ] [.core.internal.i18n.I18nProviderImpl] - Locale set to 'de_DE'.
2022-11-18 19:21:56.737 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'jdbc.persist'
2022-11-18 19:22:13.532 [INFO ] [.core.model.lsp.internal.ModelServer] - Started Language Server Protocol (LSP) service on port 5007
2022-11-18 19:22:14.824 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'dbstress.rules'
2022-11-18 19:22:19.768 [INFO ] [el.core.internal.ModelRepositoryImpl] - Loading model 'test.rules'
2022-11-18 19:22:29.077 [INFO ] [e.automation.internal.RuleEngineImpl] - Rule engine started.
2022-11-18 19:22:29.773 [INFO ] [org.openhab.ui.internal.UIService ] - Started UI on port 8080
2022-11-18 19:22:33.085 [INFO ] [ab.ui.habpanel.internal.HABPanelTile] - Started HABPanel at /habpanel
2022-11-18 19:22:35.146 [INFO ] [persistence.jdbc.internal.JdbcMapper] - JDBC::openConnection: Driver is available::Yank setupDataSource
2022-11-18 19:22:45.684 [WARN ] [persistence.jdbc.internal.JdbcMapper] - JDBC::openConnection: failed to open connection: Failed to initialize pool: Could not read resultset: Read timed out
Query is : SELECT @@tx_isolation
2022-11-18 19:22:45.689 [INFO ] [persistence.jdbc.internal.JdbcMapper] - JDBC::openConnection: Driver is available::Yank setupDataSource
2022-11-18 19:24:56.224 [WARN ] [persistence.jdbc.internal.JdbcMapper] - JDBC::openConnection: failed to open connection: Failed to initialize pool: Could not connect to address=(host=<dyndns>)(port=<port>)(type=master) : Die Wartezeit für die Verbindung ist abgelaufen (Connection timed out)
2022-11-18 19:24:56.511 [INFO ] [persistence.jdbc.internal.JdbcMapper] - JDBC::openConnection: Driver is available::Yank setupDataSource
2022-11-18 19:27:09.342 [WARN ] [persistence.jdbc.internal.JdbcMapper] - JDBC::openConnection: failed to open connection: Failed to initialize pool: Could not connect to address=(host=<dyndns>)(port=<port>)(type=master) : Die Wartezeit für die Verbindung ist abgelaufen (Connection timed out)
2022-11-18 19:27:09.346 [INFO ] [persistence.jdbc.internal.JdbcMapper] - JDBC::openConnection: Driver is available::Yank setupDataSource
2022-11-18 19:27:41.764 [WARN ] [jdbc.internal.JdbcPersistenceService] - JDBC::store: No connection to database. Cannot persist state '398' for item 'Systeminfo_Belegt (Type=NumberItem, State=399, Label=Systeminfo_Arbeitsspeicher_Belegt, Category=, Tags=[Point])'! Will retry connecting to database when error count:0 equals errReconnectThreshold:10
2022-11-18 19:27:41.804 [WARN ] [ore.internal.scheduler.SchedulerImpl] - Scheduled job '<unknown>' failed and stopped
java.lang.ClassCastException: class java.lang.Integer cannot be cast to class java.lang.Long (java.lang.Integer and java.lang.Long are in module java.base of loader 'bootstrap')
at org.openhab.persistence.jdbc.db.JdbcMariadbDAO.doPingDB(JdbcMariadbDAO.java:92) ~[?:?]
at org.openhab.persistence.jdbc.internal.JdbcMapper.pingDB(JdbcMapper.java:78) ~[?:?]
at org.openhab.persistence.jdbc.internal.JdbcMapper.checkDBAccessability(JdbcMapper.java:248) ~[?:?]
at org.openhab.persistence.jdbc.internal.JdbcPersistenceService.internalStore(JdbcPersistenceService.java:149) ~[?:?]
at org.openhab.persistence.jdbc.internal.JdbcPersistenceService.store(JdbcPersistenceService.java:135) ~[?:?]
at org.openhab.core.persistence.internal.PersistItemsJob.run(PersistItemsJob.java:60) ~[?:?]
at org.openhab.core.internal.scheduler.CronSchedulerImpl.lambda$0(CronSchedulerImpl.java:62) ~[?:?]
at org.openhab.core.internal.scheduler.CronSchedulerImpl.lambda$1(CronSchedulerImpl.java:69) ~[?:?]
at org.openhab.core.internal.scheduler.SchedulerImpl.lambda$12(SchedulerImpl.java:191) ~[?:?]
at org.openhab.core.internal.scheduler.SchedulerImpl.lambda$1(SchedulerImpl.java:88) ~[?:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Does anyone have an idea?
Thanks in advance!
Best regards
Jobst