Paddle: verifying whether the paddle-GPU install succeeded, paddle.utils.run_check() hangs for a long time with no result. Help!

zfciruhq · posted 2022-10-22 in Other
Follow (0) | Answers (7) | Views (253)

Problem description:

When installing the paddle-2.1 GPU build on a machine running Ubuntu 16.04 and entering paddle.utils.run_check() on the command line to verify it, nothing appears beyond some version info such as cuda10.2. Even after a whole day no "Success" message shows up, and no error is reported either. Any help would be appreciated. The command-line session is as follows:
import paddle
paddle.utils.run_check()
Running verify PaddlePaddle program ...
W0705 11:57:56.118161 1936 device_context.cc:404] Please NOTE: device: 0, GPU Compute Capability: 6.1, Driver API Version: 10.2, Runtime API Version: 10.2
W0705 11:57:56.120558 1936 device_context.cc:422] device: 0, cuDNN Version: 7.6.

Installation environment:

ubuntu16.04 (not a virtual machine), cuda-10.2, cudnn-7.6.5, NVIDIA driver 440.36, gcc-8.2.0, python-3.7.9, pip-21.2.3

Installation method:

pip install, pip-21.2.3
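
For reference, a minimal install command for this kind of environment (the version pin and mirror index URL below are illustrative, following the usual Paddle install pattern rather than anything stated in this thread; the official install guide lists the authoritative command):

python -m pip install paddlepaddle-gpu==2.1.0 -i https://mirror.baidu.com/pypi/simple

A GPU wheel installed this way should report True from paddle.is_compiled_with_cuda().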

GPU model:
GeForce GTX 1070Ti

9w11ddsr #1

Hi! We've received your issue; please be patient while it is answered. We will arrange technicians to respond as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version information, and the error message. You may also check the official API docs, FAQ, historical Issues and the AI community for an answer. Have a nice day!

d8tt03nd #2

Hi, please run with "GLOG_v=10 python" and check whether the runtime log contains any information.

Instead of using run_check to verify the installation, can you successfully run a simple network on a single GPU, or create a variable on the GPU?
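
A minimal sketch of that direct test, assuming the Paddle 2.x dygraph API (paddle.set_device, paddle.to_tensor and paddle.nn.Linear are standard 2.x calls; the expected outputs in the comments are illustrative):

import paddle

# Bypass run_check: place a tensor and a tiny layer on the GPU directly.
paddle.set_device('gpu:0')           # raises an error if no usable CUDA device is found

x = paddle.to_tensor([[1.0, 2.0]])   # should be allocated on CUDAPlace(0)
print(x.place)                       # expect CUDAPlace(0); CPUPlace means the GPU is not in use

linear = paddle.nn.Linear(2, 3)      # one linear layer as a smoke test
y = linear(x)
print(y.numpy())                     # forces the GPU kernel to actually execute

If this hangs or errors at set_device or at the first kernel launch, the problem is in the CUDA setup itself rather than in run_check.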

bybem2ql #3

The log produced by import paddle is as follows:

import paddle
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0705 15:34:54.543409 5247 pybind.cc:443] conditional_block has no kernels, skip
I0705 15:34:54.543469 5247 pybind.cc:443] while has no kernels, skip
I0705 15:34:54.543538 5247 pybind.cc:443] recurrent has no kernels, skip
I0705 15:34:54.543572 5247 pybind.cc:443] py_func has no kernels, skip
I0705 15:34:54.570359 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_free_idle_chunk"
I0705 15:34:54.570410 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_free_when_no_cache_hit"
I0705 15:34:54.570464 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_eager_delete_tensor_gb"
I0705 15:34:54.570487 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_enable_parallel_graph"
I0705 15:34:54.570511 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_allocator_strategy"
I0705 15:34:54.570535 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_use_system_allocator"
I0705 15:34:54.570559 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_check_nan_inf"
I0705 15:34:54.570582 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_call_stack_level"
I0705 15:34:54.570605 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_sort_sum_gradient"
I0705 15:34:54.570626 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_cpu_deterministic"
I0705 15:34:54.570647 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_enable_rpc_profiler"
I0705 15:34:54.570669 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_multiple_of_cupti_buffer_size"
I0705 15:34:54.570691 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_reader_queue_speed_test_mode"
I0705 15:34:54.570713 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_pe_profile_fname"
I0705 15:34:54.570734 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_print_sub_graph_dir"
I0705 15:34:54.570756 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_fraction_of_cpu_memory_to_use"
I0705 15:34:54.570777 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_fuse_parameter_groups_size"
I0705 15:34:54.570799 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_fuse_parameter_memory_size"
I0705 15:34:54.570822 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_init_allocated_mem"
I0705 15:34:54.570854 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_initial_cpu_memory_in_mb"
I0705 15:34:54.570874 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_memory_fraction_of_eager_deletion"
I0705 15:34:54.570897 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_use_pinned_memory"
I0705 15:34:54.570919 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_benchmark"
I0705 15:34:54.570940 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_inner_op_parallelism"
I0705 15:34:54.570961 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_tracer_profile_fname"
I0705 15:34:54.570983 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_paddle_num_threads"
I0705 15:34:54.571004 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_use_mkldnn"
I0705 15:34:54.571025 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_max_inplace_grad_add"
I0705 15:34:54.571048 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_tracer_mkldnn_ops_on"
I0705 15:34:54.571069 5247 global_value_getter_setter.cc:271] Get substring: "FLAGS_tracer_mkldnn_ops_off"
I0705 15:34:54.584703 5247 dynamic_loader.cc:159] Set paddle lib path : /home/why/.local/lib/python3.7/site-packages/paddle/libs
I0705 15:34:56.660694 5247 pybind.cc:259] -- The size of all_ops: 786 --
I0705 15:34:56.660749 5247 pybind.cc:260] -- The size of supported_ops: 0 --
I0705 15:34:56.660784 5247 pybind.cc:261] -- The size of unsupported_ops: 786 --
I0705 15:34:56.691080 5247 init.cc:88] Before Parse: argc is 2, Init commandline: dummy --tryfromenv=check_nan_inf,benchmark,eager_delete_scope,fraction_of_cpu_memory_to_use,initial_cpu_memory_in_mb,init_allocated_mem,paddle_num_threads,dist_threadpool_size,eager_delete_tensor_gb,fast_eager_deletion_mode,memory_fraction_of_eager_deletion,allocator_strategy,reader_queue_speed_test_mode,print_sub_graph_dir,pe_profile_fname,inner_op_parallelism,enable_parallel_graph,fuse_parameter_groups_size,multiple_of_cupti_buffer_size,fuse_parameter_memory_size,tracer_profile_fname,dygraph_debug,use_system_allocator,enable_unused_var_check,free_idle_chunk,free_when_no_cache_hit,call_stack_level,sort_sum_gradient,max_inplace_grad_add,use_pinned_memory,cpu_deterministic,use_mkldnn,tracer_mkldnn_ops_on,tracer_mkldnn_ops_off
I0705 15:34:56.691208 5247 init.cc:95] After Parse: argc is 1
I0705 15:34:56.972317 5247 pybind.cc:259] -- The size of all_ops: 786 --
I0705 15:34:56.972343 5247 pybind.cc:260] -- The size of supported_ops: 38 --
I0705 15:34:56.972353 5247 pybind.cc:261] -- The size of unsupported_ops: 748 --
I0705 15:34:58.660431 5247 tracer.cc:39] Set current tracer: 0x2d4dbb0
I0705 15:34:58.660631 5247 imperative.cc:1521] Tracer(0x2d4dbb0) set expected place CPUPlace

pkln4tw6 #4

Looking at the end of the log, the Tracer is using CPUPlace; when the GPU is usable it should normally show CUDAPlace.

Could you show the log from running run_check?
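
For reference, a quick way to see which place Paddle selected, sketched against the Paddle 2.x device API (paddle.is_compiled_with_cuda and paddle.device.get_device are standard calls; the outputs in the comments are illustrative):

import paddle

# A CPU-only wheel silently falls back to CPUPlace even on a GPU machine.
print(paddle.is_compiled_with_cuda())  # should print True for a GPU wheel
print(paddle.device.get_device())      # e.g. 'gpu:0' when a CUDA device is selected

# Forcing the GPU turns a silent CPU fallback into a loud error.
paddle.set_device('gpu:0')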

mwkjh3gx #5

>>> import paddle.fluid as fluid
>>> fluid.install_check.run_check()
Running Verify Fluid Program ...
I0706 15:27:31.271517 1681 op_desc.cc:677] begin to check attribute of fill_constant
I0706 15:27:31.271708 1681 op_desc.cc:683] CompileTime infer shape on fill_constant
I0706 15:27:31.271742 1681 op_desc.cc:699] From [] to [linear_0.w_0, ]
I0706 15:27:31.272056 1681 op_desc.cc:677] begin to check attribute of fill_constant
I0706 15:27:31.272074 1681 op_desc.cc:683] CompileTime infer shape on fill_constant
I0706 15:27:31.272079 1681 op_desc.cc:699] From [] to [linear_0.b_0, ]
I0706 15:27:31.272665 1681 op_desc.cc:677] begin to check attribute of matmul
I0706 15:27:31.272698 1681 op_desc.cc:683] CompileTime infer shape on matmul
I0706 15:27:31.272704 1681 op_desc.cc:699] From [inp, linear_0.w_0, ] to [linear_0.tmp_0, ]
I0706 15:27:31.272933 1681 op_desc.cc:677] begin to check attribute of elementwise_add
I0706 15:27:31.272974 1681 op_desc.cc:683] CompileTime infer shape on elementwise_add
I0706 15:27:31.272979 1681 op_desc.cc:699] From [linear_0.tmp_0, linear_0.b_0, ] to [linear_0.tmp_1, ]
I0706 15:27:31.280504 1681 op_desc.cc:677] begin to check attribute of reduce_sum
I0706 15:27:31.280521 1681 op_desc.cc:683] CompileTime infer shape on reduce_sum
I0706 15:27:31.280527 1681 op_desc.cc:699] From [linear_0.tmp_1, ] to [reduce_sum_0.tmp_0, ]
I0706 15:27:31.281018 1681 op_desc.cc:683] CompileTime infer shape on fill_constant
I0706 15:27:31.281028 1681 op_desc.cc:699] From [] to [reduce_sum_0.tmp_0@GRAD, ]
I0706 15:27:31.281088 1681 op_desc.cc:683] CompileTime infer shape on reduce_sum_grad
I0706 15:27:31.281107 1681 op_desc.cc:699] From [reduce_sum_0.tmp_0@GRAD, linear_0.tmp_1, ] to [linear_0.tmp_1@GRAD, ]
I0706 15:27:31.281188 1681 op_desc.cc:683] CompileTime infer shape on elementwise_add_grad
I0706 15:27:31.281193 1681 op_desc.cc:699] From [linear_0.tmp_1@GRAD, linear_0.tmp_0, linear_0.b_0, ] to [linear_0.tmp_0@GRAD, linear_0.b_0@GRAD, ]
I0706 15:27:31.281281 1681 op_desc.cc:683] CompileTime infer shape on matmul_grad
I0706 15:27:31.281287 1681 op_desc.cc:699] From [linear_0.tmp_0@GRAD, inp, linear_0.w_0, ] to [linear_0.w_0@GRAD, ]
I0706 15:27:31.317128 1681 executor.cc:109] Creating Variables for block 0
I0706 15:27:31.317140 1681 scope.cc:170] Create variable fetch
I0706 15:27:31.317144 1681 executor.cc:124] Create Variable fetch global, which pointer is 0x2f01b50
I0706 15:27:31.317149 1681 scope.cc:170] Create variable feed
I0706 15:27:31.317152 1681 executor.cc:124] Create Variable feed global, which pointer is 0x2ed95f0
I0706 15:27:31.317154 1681 scope.cc:170] Create variable linear_0.w_0
I0706 15:27:31.317162 1681 executor.cc:124] Create Variable linear_0.w_0 global, which pointer is 0x2b82850
I0706 15:27:31.317164 1681 scope.cc:170] Create variable linear_0.b_0
I0706 15:27:31.317167 1681 executor.cc:124] Create Variable linear_0.b_0 global, which pointer is 0x2d44770
W0706 15:27:31.317199 1681 device_context.cc:252] Please NOTE: device: 0, CUDA Capability: 61, Driver API Version: 11.3, Runtime API Version: 10.0
I0706 15:27:31.317301 1681 dynamic_loader.cc:115] Try to find library: libcudnn.so from default system path.
W0706 15:27:31.317312 1681 device_context.cc:260] device: 0, cuDNN Version: 7.6.
I0706 15:27:35.363448 1681 cuda_stream.cc:39] CUDAStream Init stream: 0x3414d40, priority: 2
I0706 15:27:35.363682 1681 dynamic_loader.cc:115] Try to find library: libcublas.so from default system path.
I0706 15:27:38.851601 1681 operator.cc:159] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[linear_0.b_0: 0 ]}.
I0706 15:27:38.879127 1681 operator.cc:1066] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I0706 15:27:38.879195 1681 auto_growth_best_fit_allocator.cc:97] Not found and reallocate 512, and remaining 256
I0706 15:27:38.879272 1681 operator.cc:180] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[linear_0.b_0:float 3 ]}.
I0706 15:27:38.879295 1681 operator.cc:159] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[linear_0.w_0: 0 ]}.
I0706 15:27:38.879317 1681 operator.cc:1066] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I0706 15:27:38.879346 1681 operator.cc:180] CUDAPlace(0) Op(fill_constant), inputs:{}, outputs:{Out[linear_0.w_0:float 2, 3 ]}.
I0706 15:27:38.879357 1681 executor.cc:77] destroy ExecutorPrepareContext
I0706 15:27:38.905817 1681 op_desc.cc:677] begin to check attribute of feed
I0706 15:27:38.906008 1681 op_desc.cc:677] begin to check attribute of fetch
I0706 15:27:38.906177 1681 op_desc.cc:677] begin to check attribute of fetch
I0706 15:27:38.906261 1681 auto_growth_best_fit_allocator.cc:97] Not found and reallocate 512, and remaining 256
I0706 15:27:38.906287 1681 feed_fetch_method.cc:30] SetFeedVariable name=feed index=0
I0706 15:27:38.906380 1681 executor_gc_helper.cc:120] Skip reference count computing of variable X(linear_0.tmp_1) in Operator reduce_sum_grad
I0706 15:27:38.906388 1681 executor_gc_helper.cc:120] Skip reference count computing of variable X(linear_0.tmp_0) in Operator elementwise_add_grad
I0706 15:27:38.906396 1681 executor.cc:109] Creating Variables for block 0
I0706 15:27:38.906401 1681 executor.cc:124] Create Variable fetch global, which pointer is 0x2f01b50
I0706 15:27:38.906404 1681 executor.cc:124] Create Variable feed global, which pointer is 0x2ed95f0
I0706 15:27:38.906409 1681 scope.cc:170] Create variable linear_0.w_0@GRAD
I0706 15:27:38.906412 1681 executor.cc:129] Create Variable linear_0.w_0@GRAD locally, which pointer is 0x314310e0
I0706 15:27:38.906416 1681 scope.cc:170] Create variable inp
I0706 15:27:38.906435 1681 executor.cc:129] Create Variable inp locally, which pointer is 0x31431220
I0706 15:27:38.906438 1681 executor.cc:124] Create Variable linear_0.b_0 global, which pointer is 0x2d44770
I0706 15:27:38.906443 1681 scope.cc:170] Create variable linear_0.tmp_1
I0706 15:27:38.906446 1681 executor.cc:129] Create Variable linear_0.tmp_1 locally, which pointer is 0x3142cf90
I0706 15:27:38.906450 1681 scope.cc:170] Create variable reduce_sum_0.tmp_0@GRAD
I0706 15:27:38.906453 1681 executor.cc:129] Create Variable reduce_sum_0.tmp_0@GRAD locally, which pointer is 0x3142cfb0
I0706 15:27:38.906456 1681 executor.cc:124] Create Variable linear_0.w_0 global, which pointer is 0x2b82850
I0706 15:27:38.906462 1681 scope.cc:170] Create variable linear_0.tmp_0
I0706 15:27:38.906466 1681 executor.cc:129] Create Variable linear_0.tmp_0 locally, which pointer is 0x3142d8d0
I0706 15:27:38.906469 1681 scope.cc:170] Create variable linear_0.tmp_0@GRAD
I0706 15:27:38.906472 1681 executor.cc:129] Create Variable linear_0.tmp_0@GRAD locally, which pointer is 0x31431700
I0706 15:27:38.906476 1681 scope.cc:170] Create variable linear_0.b_0@GRAD
I0706 15:27:38.906479 1681 executor.cc:129] Create Variable linear_0.b_0@GRAD locally, which pointer is 0x31431810
I0706 15:27:38.906482 1681 scope.cc:170] Create variable linear_0.tmp_1@GRAD
I0706 15:27:38.906486 1681 executor.cc:129] Create Variable linear_0.tmp_1@GRAD locally, which pointer is 0x31431920
I0706 15:27:38.906491 1681 scope.cc:170] Create variable reduce_sum_0.tmp_0
I0706 15:27:38.906494 1681 executor.cc:129] Create Variable reduce_sum_0.tmp_0 locally, which pointer is 0x31431a30
I0706 15:27:38.906505 1681 operator.cc:159] CUDAPlace(0) Op(feed), inputs:{X[feed: -1 ]}, outputs:{Out[inp: 0 ]}.
I0706 15:27:38.906514 1681 feed_op.cc:58] Feed variable feed's 0 column to variable inp
I0706 15:27:38.906522 1681 operator.cc:180] CUDAPlace(0) Op(feed), inputs:{X[feed: -1 ]}, outputs:{Out[inp:float 2, 2 ]}.
I0706 15:27:38.906531 1681 operator.cc:159] CUDAPlace(0) Op(matmul), inputs:{X[inp:float 2, 2 ], Y[linear_0.w_0:float 2, 3 ]}, outputs:{Out[linear_0.tmp_0: 0 ]}.
I0706 15:27:38.906541 1681 operator.cc:1066] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I0706 15:27:38.906611 1681 operator.cc:180] CUDAPlace(0) Op(matmul), inputs:{X[inp:float 2, 2 ], Y[linear_0.w_0:float 2, 3 ]}, outputs:{Out[linear_0.tmp_0:float 2, 3 ]}.
I0706 15:27:38.906636 1681 operator.cc:159] CUDAPlace(0) Op(elementwise_add), inputs:{X[linear_0.tmp_0:float 2, 3 ], Y[linear_0.b_0:float 3 ]}, outputs:{Out[linear_0.tmp_1: 0 ]}.
I0706 15:27:38.906647 1681 operator.cc:1066] expected_kernel_key:data_type[float]:data_layout[ANY_LAYOUT]:place[CUDAPlace(0)]:library_type[PLAIN]
I0706 15:27:39.118528 1681 executor.cc:77] destroy ExecutorPrepareContext
/usr/local/lib/python3.7/site-packages/paddle/fluid/executor.py:1070: UserWarning: The following exception is not an EOF exception.
"The following exception is not an EOF exception.")
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/install_check.py", line 124, in run_check
test_simple_exe()
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/install_check.py", line 122, in test_simple_exe
fetch_list=[out0.name, param_grads[1].name])
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1071, in run
six.reraise(*sys.exc_info())
File "/home/wyh/.local/lib/python3.7/site-packages/six.py", line 719, in reraise
raise value
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1066, in run
return_merged=return_merged)
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1154, in _run_impl
use_program_cache=use_program_cache)
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/executor.py", line 1229, in _run_program
fetch_var_name)
paddle.fluid.core_avx.EnforceNotMet:

C++ Call Stacks (More useful to developers):

0 std::string paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1 paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2 paddle::platform::RecordedCudaMalloc(void**, unsigned long, int)
3 paddle::memory::allocation::CUDAAllocator::AllocateImpl(unsigned long)
4 paddle::memory::allocation::AlignedAllocator::AllocateImpl(unsigned long)
5 paddle::memory::allocation::AutoGrowthBestFitAllocator::AllocateImpl(unsigned long)
6 paddle::memory::allocation::RetryAllocator::AllocateImpl(unsigned long)
7 paddle::memory::allocation::AllocatorFacade::Alloc(paddle::platform::Place const&, unsigned long)
8 paddle::memory::allocation::AllocatorFacade::AllocShared(paddle::platform::Place const&, unsigned long)
9 paddle::memory::AllocShared(paddle::platform::Place const&, unsigned long)
10 paddle::framework::Tensor::mutable_data(paddle::platform::Place const&, paddle::framework::proto::VarType_Type, unsigned long)
11 paddle::operators::ElementwiseAddKernel<paddle::platform::CUDADeviceContext, float>::Compute(paddle::framework::ExecutionContext const&) const
12 std::_Function_handler<void (paddle::framework::ExecutionContext const&), paddle::framework::OpKernelRegistrarFunctor<paddle::platform::CUDAPlace, false, 0ul, paddle::operators::ElementwiseAddKernel<paddle::platform::CUDADeviceContext, float>, paddle::operators::ElementwiseAddKernel<paddle::platform::CUDADeviceContext, double>, paddle::operators::ElementwiseAddKernel<paddle::platform::CUDADeviceContext, int>, paddle::operators::ElementwiseAddKernel<paddle::platform::CUDADeviceContext, long>, paddle::operators::ElementwiseAddKernel<paddle::platform::CUDADeviceContext, paddle::platform::float16> >::operator()(char const*, char const*, int) const::{lambda(paddle::framework::ExecutionContext const&) #1 }>::_M_invoke(std::_Any_data const&, paddle::framework::ExecutionContext const&)
13 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&, paddle::framework::RuntimeContext*) const
14 paddle::framework::OperatorWithKernel::RunImpl(paddle::framework::Scope const&, paddle::platform::Place const&) const
15 paddle::framework::OperatorBase::Run(paddle::framework::Scope const&, paddle::platform::Place const&)
16 paddle::framework::Executor::RunPartialPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, long, long, bool, bool, bool)
17 paddle::framework::Executor::RunPreparedContext(paddle::framework::ExecutorPrepareContext*, paddle::framework::Scope*, bool, bool, bool)
18 paddle::framework::Executor::Run(paddle::framework::ProgramDesc const&, paddle::framework::Scope*, int, bool, bool, std::vector<std::string, std::allocator<std::string> > const&, bool, bool)

Python Call Stacks (More useful to users):

File "/usr/local/lib/python3.7/site-packages/paddle/fluid/framework.py", line 2610, in append_op
attrs=kwargs.get("attrs", None))
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/layer_object_helper.py", line 52, in append_op
stop_gradient=stop_gradient)
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/nn.py", line 971, in forward
attrs={'axis': len(input.shape) - 1})
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 461, incall
outputs = self.forward(inputs,kwargs)
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/install_check.py", line 41, in forward
x = self._linear1(inputs)
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/dygraph/layers.py", line 461, in
call
*
outputs = self.forward(*inputs,**kwargs)
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/install_check.py", line 113, in test_simple_exe
out0 = simple_layer0(inp0)
File "/usr/local/lib/python3.7/site-packages/paddle/fluid/install_check.py", line 124, in run_check
test_simple_exe()
File "", line 1, in

Error Message Summary:

ExternalError: Cuda error(74), misaligned address.
[Advise: The device encountered a load or store instruction on a memory address which is not aligned. This leaves the process in an inconsistent state and any further CUDA work will return the same error. To continue using CUDA, the process must be terminated and relaunched.] at (/paddle/paddle/fluid/platform/gpu_info.cc:308)
[operator < elementwise_add > error]
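
No fix was posted in the thread. As a hedged diagnostic sketch (not from the thread): hangs and errors like Cuda error(74) often go together with a mismatch between the CUDA/cuDNN versions the wheel was built with and the ones installed on the system, so comparing them is a cheap first check. paddle.version.cuda() and paddle.version.cudnn() are assumed to be available, as in recent 2.x wheels:

import paddle

# What the installed wheel was compiled against...
print(paddle.__version__)        # e.g. 2.1.0
print(paddle.version.cuda())     # CUDA runtime version baked into the wheel
print(paddle.version.cudnn())    # cuDNN version baked into the wheel

# ...should line up with what the logs above report (the Driver/Runtime API
# and cuDNN versions printed by device_context.cc).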

kgqe7b3p #6

Has this been solved? I've run into the same problem: after it prints device: 0, cuDNN Version: 7.6, it just waits for a long time.

mrphzbgm #7

Has this been solved? Mine used to work as well, and then it suddenly stopped working.
