I have configured two databases in my Spring Boot application, one as the master and one as the slave. Whenever the application queries anything from the slave database, the Hikari pool turns out to be closed and an error like the following is thrown:
o.h.engine.jdbc.spi.SqlExceptionHelper (137) |> SQL Error: 0, SQLState: null
[ http-nio-2100-exec-6 ] |ERROR| o.h.engine.jdbc.spi.SqlExceptionHelper (142) |> HikariDataSource HikariDataSource (REPORT_POOL) has been closed.
[ http-nio-2100-exec-6 ] |ERROR| c.n.c.exception.GlobalExceptionHandler (485) |> Critical Error:
org.springframework.orm.jpa.JpaSystemException: Unable to acquire JDBC Connection; nested exception is org.hibernate.exception.GenericJDBCException: Unable to acquire JDBC Connection
My database configuration looks like this:
#Master Database
master.datasource.hikari.url=jdbc:postgresql://localhost:5432/master_db
master.datasource.hikari.username=*****
master.datasource.hikari.password=*****
master.datasource.hikari.minimum-idle=10
master.datasource.hikari.maximum-pool-size=30
master.datasource.hikari.connection-timeout=30000
master.datasource.hikari.leakDetection=15000
master.datasource.hikari.idle-timeout=25000
master.datasource.hikari.max-lifetime=600000
master.datasource.hikari.auto-commit=true
#Slave Database Read
report.datasource.hikari.url=jdbc:postgresql://localhost:5432/slave_db
report.datasource.hikari.username=*****
report.datasource.hikari.password=*****
report.datasource.hikari.minimum-idle=10
report.datasource.hikari.maximum-pool-size=30
report.datasource.hikari.connection-timeout=30000
report.datasource.hikari.leakDetection=15000
report.datasource.hikari.idle-timeout=20000
report.datasource.hikari.max-lifetime=600000
report.datasource.hikari.auto-commit=true
I configure the two data sources as follows:
@Bean
@Primary
@Autowired
public DataSource dataSource() {
    DataSourceRouting routingDataSource = new DataSourceRouting();
    routingDataSource.initDatasource(masterDataSource(), readDataSource());
    return routingDataSource;
}

private DataSource masterDataSource() {
    HikariConfig master = new HikariConfig();
    master.setPoolName("MASTER_POOL");
    master.setJdbcUrl(env.getProperty(String.format("%s.url", MASTER)));
    master.setUsername(env.getProperty(String.format("%s.username", MASTER)));
    master.setPassword(env.getProperty(String.format("%s.password", MASTER)));
    master.setMinimumIdle(Integer.valueOf(env.getProperty(String.format("%s.minimum-idle", MASTER))));
    master.setMaximumPoolSize(Integer.valueOf(env.getProperty(String.format("%s.maximum-pool-size", MASTER))));
    master.setConnectionTimeout(Long.valueOf(env.getProperty(String.format("%s.connection-timeout", MASTER))));
    master.setIdleTimeout(Long.valueOf(env.getProperty(String.format("%s.idle-timeout", MASTER))));
    master.setMaxLifetime(Long.valueOf(env.getProperty(String.format("%s.max-lifetime", MASTER))));
    master.setLeakDetectionThreshold(Long.valueOf(env.getProperty(String.format("%s.leakDetection", MASTER))));
    master.setAutoCommit(Boolean.valueOf(env.getProperty(String.format("%s.auto-commit", MASTER))));
    masterDataSource = new HikariDataSource(master);
    return masterDataSource;
}

private DataSource readDataSource() {
    HikariConfig report = new HikariConfig();
    report.setPoolName("REPORT_POOL");
    report.setJdbcUrl(env.getProperty(String.format("%s.url", REPORT)));
    report.setUsername(env.getProperty(String.format("%s.username", REPORT)));
    report.setPassword(env.getProperty(String.format("%s.password", REPORT)));
    report.setMinimumIdle(Integer.valueOf(env.getProperty(String.format("%s.minimum-idle", REPORT))));
    report.setMaximumPoolSize(Integer.valueOf(env.getProperty(String.format("%s.maximum-pool-size", REPORT))));
    report.setConnectionTimeout(Long.valueOf(env.getProperty(String.format("%s.connection-timeout", REPORT))));
    report.setIdleTimeout(Long.valueOf(env.getProperty(String.format("%s.idle-timeout", REPORT))));
    report.setMaxLifetime(Long.valueOf(env.getProperty(String.format("%s.max-lifetime", REPORT))));
    report.setLeakDetectionThreshold(Long.valueOf(env.getProperty(String.format("%s.leakDetection", REPORT))));
    report.setAutoCommit(Boolean.valueOf(env.getProperty(String.format("%s.auto-commit", REPORT))));
    report.setReadOnly(Boolean.valueOf(env.getProperty(String.format("%s.read-only", REPORT))));
    try (HikariDataSource readDataSource = new HikariDataSource(report)) {
        readDataSource.getConnection();
        return readDataSource;
    } catch (Exception e) {
        log.warn("\n\n******REPORT DB NOT FOUND, CONNECTED TO MASTER DB *******\n\n");
        return masterDataSource;
    }
}
I have a DataSourceInterceptor that routes requests to the report database:
@Slf4j
@Component
public class DataSourceInterceptor extends HandlerInterceptorAdapter {

    protected static final String[] PREFIX_REPORT_DS = new String[] { "/admin/report/**", "/report/**" };

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler)
            throws Exception {
        String uri = request.getRequestURI();
        if (StringUtil.isStartWith(uri, StringUtil.trimStricks(PREFIX_REPORT_DS))) {
            log.info("Redirect to Report Database for URL: {}", uri);
            DataSourceRouting.setReportRoute();
        }
        return true;
    }
}
and a DataSourceRouting that switches routing over to the slave database:
@Slf4j
public class DataSourceRouting extends AbstractRoutingDataSource {

    private static final ThreadLocal<Route> routeContext = new ThreadLocal<>();

    public enum Route {
        MASTER, REPORT
    }

    public static void setReportRoute() {
        routeContext.set(Route.REPORT);
    }

    @Override
    protected Object determineCurrentLookupKey() {
        return routeContext.get();
    }

    public void initDatasource(DataSource masterDs, DataSource reportDs) {
        log.info("Datasource routing...");
        Map<Object, Object> dataSourceMap = new HashMap<>();
        dataSourceMap.put(Route.MASTER, masterDs);
        dataSourceMap.put(Route.REPORT, reportDs);
        this.setTargetDataSources(dataSourceMap);
        this.setDefaultTargetDataSource(masterDs);
    }
}
If the slave database is down or unreachable during application startup, the application falls back to the master database and everything works fine. But when the slave database is connected and the application tries to run a query against it, the exception above is thrown.
1 Answer
The problem, as already hinted at in the comments, is your configuration. Besides being overly verbose, it is also wrong. The mistake is that you use a try-with-resources block, which automatically closes the resource once the block completes. So you always end up with a closed data source, regardless of whether it connected or not.
I strongly suggest configuring the data sources the Boot-preferred way. The url, username, and password properties should be set directly on the data source so that startup does not fail. To handle broken connections, make sure the minimum and maximum pool sizes differ and that a sensible validationTimeout is set for validation. For the report database you can additionally set initializationFailTimeout to a negative value. Then no connection is obtained at startup; it is acquired later instead. This even lets you start with the report database down, and thanks to validation it will be served later once it becomes available. A sketch of readDataSource() along those lines is shown below.
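For example, keeping the manual HikariConfig style from your question for brevity (rather than the property-based configuration Boot prefers), readDataSource() could be reduced to something like the following sketch. The timeout value is illustrative, and the fallback to the master pool is deliberately dropped, which is a separate design decision:

private DataSource readDataSource() {
    HikariConfig report = new HikariConfig();
    report.setPoolName("REPORT_POOL");
    report.setJdbcUrl(env.getProperty(String.format("%s.url", REPORT)));
    report.setUsername(env.getProperty(String.format("%s.username", REPORT)));
    report.setPassword(env.getProperty(String.format("%s.password", REPORT)));
    report.setReadOnly(true);
    // Validate borrowed connections quickly instead of failing the whole request.
    report.setValidationTimeout(5000);
    // A negative value skips the connection attempt at startup; connections are
    // acquired (and validated) lazily, so the app starts even if the replica is down.
    report.setInitializationFailTimeout(-1);
    // No try-with-resources: the pool must outlive this method and is closed
    // by the container when the application shuts down.
    return new HikariDataSource(report);
}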
Another mistake is that you never reset the routing key. Because it is stored in a ThreadLocal and the request-handling threads come from a pool, the key will eventually be reused for unrelated requests. You need to clear it in the afterCompletion method of your HandlerInterceptor.
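For illustration, assuming a small clearRoute() helper is added to DataSourceRouting, the interceptor could reset the key like this:

// In DataSourceRouting (hypothetical helper):
public static void clearRoute() {
    routeContext.remove();
}

// In DataSourceInterceptor:
@Override
public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
        Object handler, Exception ex) throws Exception {
    // Always clear the ThreadLocal so a pooled worker thread does not keep
    // routing later, unrelated requests to the report database.
    DataSourceRouting.clearRoute();
}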