From tom.eulenfeld at uni-jena.de Mon Feb 12 10:13:33 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Mon, 12 Feb 2018 10:13:33 +0100 Subject: [yam] configuration and data load In-Reply-To: References: Message-ID: Hello Yawar, welcome to the mailing list, and sorry for not making this clear enough: to post on the mailing list, the topic must be enclosed in square brackets. Hopefully, this is a sufficient measure against spam. Regarding your question about loading the data, please try the following expression for the "data" option: "your_data_path/DF01.{station}.{channel}/{t.year}.{t.julday:03d}.*.sac" The "data_format" option should be set to "SAC", of course. Best regards! Tom > ---------- Forwarded message ---------- > From: ** > > Date: Sun, Feb 11, 2018 at 9:38 PM > Subject: Yam! configuration and data load > > > Message rejected by filter rule match > > > > ---------- Forwarded message ---------- > To: seistools at listserv.uni-jena.de > Date: Sun, 11 Feb 2018 21:38:07 -0200 > Subject: Yam! configuration and data load > Hello Tom, > > I trust this mail finds you well. > > I have successfully installed 'Yam'. Now I have problems configuring > my data and consequently loading it into Yam. I have hourly data files > in a BUD data structure, e.g. 'DF01.FDF.BHE.2016.270.18.sac'. > > Please help me to configure my data. > > Thanks > > > > > Regards, > Yawar Hussain > Ph.D. Student, Geotechnical Engineering > University of Brasilia, Brazil. > >
From tom.eulenfeld at uni-jena.de Tue Mar 6 11:14:51 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Tue, 6 Mar 2018 11:14:51 +0100 Subject: [yam] multi-core problem Message-ID: <60fb68f9-474a-ab51-ee7c-08f85f75ee41@uni-jena.de> Hello Weijun, sorry, your mail somehow got lost by the Mailman instance. I attach it below. Regarding your problem: 1. Did you run yam-runtests? Does it show the same error? Which operating system are you using? 2. Is your installation up to date? Check yam --version. The latest version is 0.3.0. 3. If you are already on the latest version, can you try out the development version of yam? You can install it with pip install https://github.com/trichter/yam/archive/master.zip Recently, I reworked how things are written to the HDF5 file. In version 0.3.0 and prior versions an extra process was spawned just for writing into HDF5 files to circumvent the concurrent writing problem. In the dev version, writing is done from the main process, which is simpler and less error-prone. Best, Tom -------- Forwarded Message -------- Hello, Yawar, When I run yam with multiple cores, errors frequently appear, as in the example below. It is probably a problem with concurrent writing to the HDF5 file in commands.py. I am not familiar with HDF5, so I don't know whether the website (http://docs.h5py.org/en/latest/swmr.html) and its "Multiprocess concurrent write and read" section can help.
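(As an illustrative aside, not yam's actual code: the dev-version approach described above, in which worker processes only compute and the parent process alone appends to the HDF5 file, can be sketched roughly like this; the function and file names are only placeholders.)

# Sketch only: the parent process is the single writer, so the HDF5 file is
# never opened for writing by two processes at once.
import multiprocessing

import obspyh5  # noqa: F401  (ObsPy plugin providing the 'H5' format used below)
from obspy import read


def correlate_one_task(task):
    # placeholder for the real correlation work; returns an ObsPy Stream
    return read()  # ObsPy's bundled example stream


if __name__ == '__main__':
    tasks = range(3)  # e.g. one task per day
    with multiprocessing.Pool(2) as pool:
        for stream in pool.imap_unordered(correlate_one_task, tasks):
            # sequential write in the parent avoids the
            # "unable to lock file" OSError raised by concurrent writers
            stream.write('corr_example.h5', 'H5', mode='a')
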
Thanks, ----------------------------------------- $ yam correlate 1b --------------------error message-------------------------------- 20%|████████▌ | 75/366 [02:52<11:08, 2.30s/it]Traceback (most recent call last): File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/h5py/_hl/files.py", line 111, in make_fid fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl) File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper File "h5py/h5f.pyx", line 78, in h5py.h5f.open OSError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/wwj/anaconda3/envs/obspy/bin/yam", line 11, in load_entry_point('yam', 'console_scripts', 'yam')() File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 388, in run_cmdline run(**args) File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 147, in run run2(command, **args) File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 211, in run2 yam.commands.start_correlate(io, **args) File "/home/wwj/old/gits/obspy/yam/yam/commands.py", line 168, in start_correlate _write_stream(result) File "/home/wwj/old/gits/obspy/yam/yam/commands.py", line 156, in _write_stream result[key].write(io[key], 'H5', mode='a') File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/stream.py", line 1443, in write write_format(self, filename, **kwargs) File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/obspyh5.py", line 186, in writeh5 with h5py.File(fname, mode, libver='latest') as f: File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/h5py/_hl/files.py", line 269, in __init__ fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr) File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/h5py/_hl/files.py", line 113, in make_fid fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl) File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper File "h5py/h5f.pyx", line 98, in h5py.h5f.create OSError: Unable to create file (unable to open file: name = 'corr.h5', errno = 17, error message = 'File exists', flags = 15, o_flags = c2) -- Weijun Wang Institute of Earthquake Forecasting, China Earthquake Administration Beijing, China From tom.eulenfeld at uni-jena.de Tue Mar 6 16:49:27 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Tue, 6 Mar 2018 16:49:27 +0100 Subject: [yam] multi-core problem In-Reply-To: <29bf0ca3.1be.161fba3bd35.Coremail.wjwang@cea-ies.ac.cn> References: <60fb68f9-474a-ab51-ee7c-08f85f75ee41@uni-jena.de> <449a74f.1b6.161faf67ac5.Coremail.wjwang@cea-ies.ac.cn> <9f534451-fc89-d2b4-becf-0a9fa3ac8faa@uni-jena.de> <29bf0ca3.1be.161fba3bd35.Coremail.wjwang@cea-ies.ac.cn> Message-ID: Hi Weijun, I am also writing to the mailing list. Maybe others face similar problems in the future. Yes, the output is not very helpful. I've seen that you run Python 3.6.1 and I found this bug which might be related: https://bugs.python.org/issue28699 Can you try to upgrade your Python installation? I suggest to use Anaconda. This probably will not fix the failure, but it might resolve the dead lock and give a more meaningful error message. Cheers! Tom On 06.03.2018 15:07, Weijun Wang wrote: > Hi, Tom, > > I am not sure which line I should send to you, so copy all the outputs to you. 
Sorry it looks like still no useful information. > > Thanks, > > Weijun. > > __________________ > > (obspy) [wwj at t570 yam_test]$ yam-runtests -v ... >> yam correlate 1 -vvv > CLI tests passed: 35%|██████████████████████████████████████████████████████▊ | 26/74 [00:20<00:19, 2.43it/s] > ***CTRL+C here*** > > CLI tests passed: 36%|████████████████████████████████████████████████████████▉ | 27/74 [03:48<17:41, 22.58s/it]Traceback (most recent call last): > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", line 684, in next > item = self._items.popleft() > IndexError: pop from an empty deque > > During handling of the above exception, another exception occurred: > > Traceback (most recent call last): > File "/home/wwj/anaconda3/envs/obspy/bin/yam-runtests", line 11, in > load_entry_point('yam', 'console_scripts', 'yam-runtests')() > File "/home/wwj/old/gits/obspy/yam/yam/tests/__init__.py", line 27, in run > ret = not runner.run(suite).wasSuccessful() > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/runner.py", line 176, in run > test(result) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 84, in __call__ > return self.run(*args, **kwds) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 122, in run > test(result) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 84, in __call__ > return self.run(*args, **kwds) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 122, in run > test(result) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 84, in __call__ > return self.run(*args, **kwds) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 122, in run > test(result) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/case.py", line 649, in __call__ > return self.run(*args, **kwds) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/case.py", line 601, in run > testMethod() > File "/home/wwj/old/gits/obspy/yam/yam/tests/test_main.py", line 168, in test_cli > self.out('correlate 1') # takes long > File "/home/wwj/old/gits/obspy/yam/yam/tests/test_main.py", line 82, in out > self.script(cmd.split()) > File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 388, in run_cmdline > run(**args) > File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 147, in run > run2(command, **args) > File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 211, in run2 > yam.commands.start_correlate(io, **args) > File "/home/wwj/old/gits/obspy/yam/yam/commands.py", line 167, in start_correlate > total=len(tasks)): > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", line 959, in __iter__ > for obj in iterable: > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", line 688, in next > self._cond.wait(timeout) > File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/threading.py", line 295, in wait > waiter.acquire() > KeyboardInterrupt > > > >> -----原始邮件----- >> 发件人: "Tom Eulenfeld" >> 发送时间: 2018-03-06 21:00:48 (星期二) >> 收件人: "Weijun Wang" >> 抄送: >> 主题: Re: [yam] multi-core problem >> >> Hi Weijun, >> >> good to hear that it is at least working for a single core. >> >> Unfortunately, I cannot reproduce your error. I think the child process >> is dying somehow. Can you please post the last view lines of >> yam-runtests -v >> >> I think I need to add more debug statements in the code to find the bug. >> >> Cheers! 
>> Tom >> >> >> On 06.03.2018 11:58, Weijun Wang wrote: >>> >>> Hi, Tom, >>> >>> Sorry I got your name wrong at my first email. >>> >>> the enviroments I run are: >>> >>> OS: CentOS Linux release 7.4.1708 (Core) >>> Python: 3.6.1 >>> obspy: 1.1.0 py36_1 conda-forge >>> obspyh5: 0.3.2 >>> yam: 0.3.1-dev >>> >>> >>> yes,the error messages I posted before were come from running the demo notebooks( notebooks yam_velocity_variations_patcx ) . >>> yam-runtests got stuck at somewhere, such as: >>> ----------------------------------- >>> (obspy) [wwj at t570 yam_test]$ yam-runtests >>> CLI tests passed: 32%|██████████████████████████████████████████████████▌ | 24/74 [00:17<00:38, 1.30it/s] >>> ----------------------------------- >>> and will never continue, when I ctrl+c, will get: >>> ------------------------------------- >>> Traceback (most recent call last): >>> File "/home/wwj/anaconda3/envs/obspy/bin/yam-runtests", line 11, in >>> load_entry_point('yam', 'console_scripts', 'yam-runtests')() >>> File "/home/wwj/old/gits/obspy/yam/yam/tests/__init__.py", line 27, in run >>> ret = not runner.run(suite).wasSuccessful() >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/runner.py", line 176, in run >>> test(result) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 84, in __call__ >>> return self.run(*args, **kwds) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 122, in run >>> test(result) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 84, in __call__ >>> return self.run(*args, **kwds) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 122, in run >>> test(result) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 84, in __call__ >>> return self.run(*args, **kwds) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/suite.py", line 122, in run >>> test(result) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/case.py", line 649, in __call__ >>> return self.run(*args, **kwds) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/unittest/case.py", line 601, in run >>> testMethod() >>> File "/home/wwj/old/gits/obspy/yam/yam/tests/test_main.py", line 168, in test_cli >>> self.out('correlate 1') # takes long >>> File "/home/wwj/old/gits/obspy/yam/yam/tests/test_main.py", line 82, in out >>> self.script(cmd.split()) >>> File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 388, in run_cmdline >>> run(**args) >>> File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 147, in run >>> run2(command, **args) >>> File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 211, in run2 >>> yam.commands.start_correlate(io, **args) >>> File "/home/wwj/old/gits/obspy/yam/yam/commands.py", line 167, in start_correlate >>> total=len(tasks)): >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", line 959, in __iter__ >>> for obj in iterable: >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", line 688, in next >>> self._cond.wait(timeout) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/threading.py", line 295, in wait >>> waiter.acquire() >>> KeyboardInterrupt >>> ^CError in atexit._run_exitfuncs: >>> Traceback (most recent call last): >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/multiprocessing/util.py", line 254, in _run_finalizers >>> finalizer() >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/multiprocessing/util.py", line 186, in __call__ >>> res 
= self._callback(*self._args, **self._kwargs) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", line 535, in _terminate_pool >>> cls._help_stuff_finish(inqueue, task_handler, len(pool)) >>> File "/home/wwj/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", line 520, in _help_stuff_finish >>> inqueue._rlock.acquire() >>> KeyboardInterruptij >>> ---------------------------------- >>> >>> thanks, >>> >>> Weijun. >>> >>> >>> >>>> -----原始邮件----- >>>> 发件人: "Tom Eulenfeld" >>>> 发送时间: 2018-03-06 18:14:51 (星期二) >>>> 收件人: seistools at listserv.uni-jena.de >>>> 抄送: wjwang at cea-ies.ac.cn >>>> 主题: Re: [yam] multi-core problem >>>> >>>> Hello Weijun, >>>> >>>> sorry, your mail got somehow lost by the Mailman instance. I attach it >>>> below. >>>> >>>> Regarding your problem: >>>> >>>> 1. Did you run yam-runtests? Does it show the same error? Which >>>> operating system are you using? >>>> 2. Is your installation up to date? Check yam --version. The latest >>>> version is 0.3.0. >>>> 3. If you are already on the latest version. Can you try out the >>>> development version of yam? You can install dev with >>>> >>>> pip install https://github.com/trichter/yam/archive/master.zip >>>> >>>> Recently, I reworked how things are written to the HDF5 file. In version >>>> 0.3.0 and prior versions an extra process was spanned just for writing >>>> into HDF5 files to circumvent the concurrent writing problem. In the dev >>>> version writing is done from the main process which is simpler and less >>>> error prone. >>>> >>>> Best, >>>> Tom >>>> >>>> >>>> >>>> -------- Forwarded Message -------- >>>> >>>> Hello, Yawar, >>>> When I run yam with multi-core, errors frequently appear as a example >>>> following. It should be the problem about concurrent writting to hdf5 >>>> file in commands.py. I am not familar with hdf5, so don't know whether >>>> the website( http://docs.h5py.org/en/latest/swmr.html) and >>>> "Multiprocess concurrent write and read" segment can help. 
>>>> Thanks, >>>> >>>> ----------------------------------------- >>>> >>>> $ yam correlate 1b >>>> >>>> --------------------error message-------------------------------- >>>> 20%|████████▌ | 75/366 [02:52<11:08, >>>> 2.30s/it]Traceback (most recent call last): >>>> File >>>> "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/h5py/_hl/files.py", >>>> line 111, in make_fid >>>> fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl) >>>> File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper >>>> File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper >>>> File "h5py/h5f.pyx", line 78, in h5py.h5f.open >>>> OSError: Unable to open file (unable to lock file, errno = 11, error >>>> message = 'Resource temporarily unavailable') >>>> >>>> During handling of the above exception, another exception occurred: >>>> >>>> Traceback (most recent call last): >>>> File "/home/wwj/anaconda3/envs/obspy/bin/yam", line 11, in >>>> load_entry_point('yam', 'console_scripts', 'yam')() >>>> File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 388, in run_cmdline >>>> run(**args) >>>> File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 147, in run >>>> run2(command, **args) >>>> File "/home/wwj/old/gits/obspy/yam/yam/main.py", line 211, in run2 >>>> yam.commands.start_correlate(io, **args) >>>> File "/home/wwj/old/gits/obspy/yam/yam/commands.py", line 168, in >>>> start_correlate >>>> _write_stream(result) >>>> File "/home/wwj/old/gits/obspy/yam/yam/commands.py", line 156, in >>>> _write_stream >>>> result[key].write(io[key], 'H5', mode='a') >>>> File >>>> "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/stream.py", >>>> line 1443, in write >>>> write_format(self, filename, **kwargs) >>>> File >>>> "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/obspyh5.py", >>>> line 186, in writeh5 >>>> with h5py.File(fname, mode, libver='latest') as f: >>>> File >>>> "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/h5py/_hl/files.py", >>>> line 269, in __init__ >>>> fid = make_fid(name, mode, userblock_size, fapl, swmr=swmr) >>>> File >>>> "/home/wwj/anaconda3/envs/obspy/lib/python3.6/site-packages/h5py/_hl/files.py", >>>> line 113, in make_fid >>>> fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl) >>>> File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper >>>> File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper >>>> File "h5py/h5f.pyx", line 98, in h5py.h5f.create >>>> OSError: Unable to create file (unable to open file: name = 'corr.h5', >>>> errno = 17, error message = 'File exists', flags = 15, o_flags = c2) >>>> >>>> >>>> -- >>>> Weijun Wang >>>> >>>> Institute of Earthquake Forecasting, China Earthquake Administration >>>> Beijing, China -- Dr. Tom Eulenfeld Institute for Geosciences Friedrich-Schiller-University Jena From yawar.pgn at gmail.com Wed Apr 11 17:10:27 2018 From: yawar.pgn at gmail.com (Yawar Hussain) Date: Wed, 11 Apr 2018 12:10:27 -0300 Subject: [yam] data load Message-ID: Hello, Tom I have a problem with data load. Please, have a look at details and please let me know where I wrong. 
yam-master/yam$ yam info Stations: Not found Raw data (expression for day files): example_data/{network}.{station}.{location}.{channel}__{t.year}{t.month:02d }{t.day:02d}*.sac 0 files found Config ids: c Corr: 1, 1a, auto s Stack: 1, 2 t Stretch: 1, 2 Correlations (channel combinations, correlations calculated): None Stacks: None Stretching matrices: None My files: FDF.DF01.BHZ.2016.308.00.sac FDF.DF01.BHZ.2016.309.00.sac FDF.DF01.BHZ.2016.310.00.sac FDF.DF02.BHZ.2016.308.00.sac FDF.DF02.BHZ.2016.309.00.sac FDF.DF02.BHZ.2016.310.00.sac My JSON file (attached) and it is placed: /home/yawar/Downloads/yam-master/yam while data folder is: /home/yawar/Downloads/yam-master/yam/example_data Regards, Yawar Hussain Ph.D. Student, Geotechnical Engineering University of Brasilia, Brazil. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: conf.json Type: application/json Size: 8353 bytes Desc: not available URL: From tom.eulenfeld at uni-jena.de Thu Apr 12 01:04:39 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Thu, 12 Apr 2018 01:04:39 +0200 Subject: [yam] data load In-Reply-To: Message-ID: <20180412010439.Horde.LA7wWpBTahKVXhRt13rmXm9@webmail.uni-jena.de> Hello Yawar, to make yam find your data you need to adapt the "data" config to your file naming convention: "example_data/{network}.{station}.{channel}.{t.year}.{t.julday:03d}.00.sac" Additionally, you will need a STATIONXML or similar file with coordinates and channels of your stations ("inventory" config). Best, Tom Zitat von Yawar Hussain : > Hello, > Tom > I have a problem with data load. > > Please, have a look at details and please let me know where I wrong. > > > yam-master/yam$ yam info > Stations: > Not found > Raw data (expression for day files): > > example_data/{network}.{station}.{location}.{channel}__{t.year}{t.month:02d > }{t.day:02d}*.sac > 0 files found > Config ids: > c Corr: 1, 1a, auto > s Stack: 1, 2 > t Stretch: 1, 2 > Correlations (channel combinations, correlations calculated): > None > Stacks: > None > Stretching matrices: > None > > My files: > > FDF.DF01.BHZ.2016.308.00.sac > FDF.DF01.BHZ.2016.309.00.sac > FDF.DF01.BHZ.2016.310.00.sac > FDF.DF02.BHZ.2016.308.00.sac > FDF.DF02.BHZ.2016.309.00.sac > FDF.DF02.BHZ.2016.310.00.sac > > My JSON file (attached) and it is placed: > /home/yawar/Downloads/yam-master/yam > while data folder is: /home/yawar/Downloads/yam-master/yam/example_data > > > > > Regards, > Yawar Hussain > Ph.D. Student, Geotechnical Engineering > University of Brasilia, Brazil. From tom.eulenfeld at uni-jena.de Wed May 2 14:22:22 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Wed, 2 May 2018 14:22:22 +0200 Subject: [qopen] error on MacOS with multiprocessing / data load In-Reply-To: References: <91a1a6db-25b9-0f86-e8a0-b0c09010682c@uni-jena.de> Message-ID: <2fb73dc6-86cf-9b09-e449-ecbf9a65a0b1@uni-jena.de> Hi Manuel, let's move the conversation to the mailing list as other people might be interested, too. The work-around confirms that the __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNTIONALITY___YOU_MUST_EXEC__() error on MacOS has to do with multiprocessing. Still it's a bit disappointing to use only one core if more are available. After some googling I found out, that it is probably also related to the maxosx backend of matplotlib which does not allow to plot in several subprocesses. 
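(As a standalone illustration of the cause and the usual fix, independent of Qopen itself: selecting a non-interactive backend before pyplot is imported anywhere lets forked worker processes render figures; the script and file names below are only examples.)

# Sketch only, not Qopen code.
import multiprocessing

import matplotlib
matplotlib.use('agg')  # must happen before "import matplotlib.pyplot"
import matplotlib.pyplot as plt


def make_plot(i):
    fig, ax = plt.subplots()
    ax.plot([0, 1], [0, i])
    fig.savefig('example_plot_%d.png' % i)  # file name is just an example
    plt.close(fig)


if __name__ == '__main__':
    with multiprocessing.Pool(2) as pool:
        pool.map(make_plot, range(3))
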
Can you please try another backend by starting Qopen with python -c "import matplotlib as mpl; import qopen; mpl.use('agg'); qopen.core.run_cmdline()" It should be possible to use SAC files, but additionally you'll need Event and Station XML files. If you need further assistance, let me know. Cheers, Tom On 02.05.2018 13:56, Jaimes Caballero, Manuel Alejandro wrote: > Dr. Eulenfeld, > > Thanks for your quick reply!, the program seems to work like that and it > takes less than 5 minutes. I'm wondering if it'd be possible to use .SAC > files as the input files, or would I need a .xml file for stations and > events?, I could not find anything regarding that on seistools, I found > that there was a similar procedure for the program yam but I do not know > if it applies the same to qopen? > > Thanks a lot, > > Manuel Jaimes. > > On Wed, May 2, 2018 at 6:02 AM, Tom Eulenfeld > wrote: > > In the meantime you can try > > qopen --njobs 1 > > which does not use the multiprocessing module. > > Cheers, > Tom > > > > On 02.05.2018 10:25, Tom Eulenfeld wrote: > > Hi Manuel, > > no, I've never encountered this issue before. > > Do you mind opening a new ticket on github? > https://github.com/trichter/qopen/issues > > > Cheers! > Tom > > > On 01.05.2018 17:19, Jaimes Caballero, Manuel Alejandro wrote: > > Hi Dr. Eulenfeld, > > I have been trying to use your software qopen, the > installation process and everything until qopen --tutorial > works perfectly, when I try to run qopen in the terminal it > gives an error which says: Break on > __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNTIONALITY___YOU_MUST_EXEC__() > to debug. > > Have you encountered this issue before or do you know what > might be the cause? I'm using macOS. > > Thanks for your time. > > Manuel Jaimes. > > From majc78 at mun.ca Wed May 2 15:48:35 2018 From: majc78 at mun.ca (Jaimes Caballero, Manuel Alejandro) Date: Wed, 2 May 2018 11:18:35 -0230 Subject: [qopen] error on MacOS with multiprocessing / data load In-Reply-To: <2fb73dc6-86cf-9b09-e449-ecbf9a65a0b1@uni-jena.de> References: <91a1a6db-25b9-0f86-e8a0-b0c09010682c@uni-jena.de> <2fb73dc6-86cf-9b09-e449-ecbf9a65a0b1@uni-jena.de> Message-ID: Hi Dr. Eulenfeld, That sounds perfect. The line that you provided seems to fix the issue. do you know if event and stations XML files can be created from .SAC files, I know all the information is contained in the header but I do not know if the XML files would have to be created from scratch. Thanks for your cooperation, Manuel Jaimes. On Wed, May 2, 2018 at 9:52 AM, Tom Eulenfeld wrote: > Hi Manuel, > > let's move the conversation to the mailing list as other people might be > interested, too. > > The work-around confirms that the > __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDAT > ION_FUNTIONALITY___YOU_MUST_EXEC__() > error on MacOS has to do with multiprocessing. Still it's a bit > disappointing to use only one core if more are available. After some > googling I found out, that it is probably also related to the maxosx > backend of matplotlib which does not allow to plot in several subprocesses. > Can you please try another backend by starting Qopen with > > python -c "import matplotlib as mpl; import qopen; mpl.use('agg'); > qopen.core.run_cmdline()" > > > It should be possible to use SAC files, but additionally you'll need Event > and Station XML files. If you need further assistance, let me know. > > Cheers, > Tom > > > > On 02.05.2018 13:56, Jaimes Caballero, Manuel Alejandro wrote: > >> Dr. 
Eulenfeld, >> >> Thanks for your quick reply!, the program seems to work like that and it >> takes less than 5 minutes. I'm wondering if it'd be possible to use .SAC >> files as the input files, or would I need a .xml file for stations and >> events?, I could not find anything regarding that on seistools, I found >> that there was a similar procedure for the program yam but I do not know if >> it applies the same to qopen? >> >> Thanks a lot, >> >> Manuel Jaimes. >> >> On Wed, May 2, 2018 at 6:02 AM, Tom Eulenfeld > > wrote: >> >> In the meantime you can try >> >> qopen --njobs 1 >> >> which does not use the multiprocessing module. >> >> Cheers, >> Tom >> >> >> >> On 02.05.2018 10:25, Tom Eulenfeld wrote: >> >> Hi Manuel, >> >> no, I've never encountered this issue before. >> >> Do you mind opening a new ticket on github? >> https://github.com/trichter/qopen/issues >> >> >> Cheers! >> Tom >> >> >> On 01.05.2018 17:19, Jaimes Caballero, Manuel Alejandro wrote: >> >> Hi Dr. Eulenfeld, >> >> I have been trying to use your software qopen, the >> installation process and everything until qopen --tutorial >> works perfectly, when I try to run qopen in the terminal it >> gives an error which says: Break on >> __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDAT >> ION_FUNTIONALITY___YOU_MUST_EXEC__() >> to debug. >> >> Have you encountered this issue before or do you know what >> might be the cause? I'm using macOS. >> >> Thanks for your time. >> >> Manuel Jaimes. >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.eulenfeld at uni-jena.de Wed May 2 16:23:56 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Wed, 2 May 2018 16:23:56 +0200 Subject: [qopen] error on MacOS with multiprocessing / data load In-Reply-To: References: <91a1a6db-25b9-0f86-e8a0-b0c09010682c@uni-jena.de> <2fb73dc6-86cf-9b09-e449-ecbf9a65a0b1@uni-jena.de> Message-ID: Hi Manuel, yes, if you only have the SAC files, you'll need to iterate over them and create the XML files from scratch. Here is the relevant ObsPy code for stationXML http://docs.obspy.org/tutorial/code_snippets/stationxml_file_from_scratch.html Similar code for the events is in the Obspy codebase (i.e. obspy/io/sh/evt.py). Hope it helps! Tom On 02.05.2018 15:48, Jaimes Caballero, Manuel Alejandro wrote: > Hi Dr. Eulenfeld, > > That sounds perfect. The line that you provided seems to fix the issue. > do you know if event and stations XML files can be created from .SAC > files, I know all the information is contained in the header but I do > not know if the XML files would have to be created from scratch. > > Thanks for your cooperation, > > Manuel Jaimes. > > On Wed, May 2, 2018 at 9:52 AM, Tom Eulenfeld > wrote: > > Hi Manuel, > > let's move the conversation to the mailing list as other people > might be interested, too. > > The work-around confirms that the > __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNTIONALITY___YOU_MUST_EXEC__() > error on MacOS has to do with multiprocessing. Still it's a bit > disappointing to use only one core if more are available. After some > googling I found out, that it is probably also related to the maxosx > backend of matplotlib which does not allow to plot in several > subprocesses. > Can you please try another backend by starting Qopen with > > python -c "import matplotlib as mpl; import qopen; mpl.use('agg'); > qopen.core.run_cmdline()" > > > It should be possible to use SAC files, but additionally you'll need > Event and Station XML files. 
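(A minimal sketch of the "from scratch" approach mentioned above, pulling station coordinates out of SAC headers with ObsPy; the file pattern, network code 'XX' and output name are placeholders, and it assumes the stla/stlo/stel header fields are filled in.)

# Sketch only: build a StationXML inventory from SAC headers.
import glob

from obspy import read
from obspy.core.inventory import Channel, Inventory, Network, Site, Station

channels = {}   # station code -> (sac header of first file, list of channels)
seen = set()
for fname in glob.glob('example_data/*.sac'):            # path is an example
    tr = read(fname, headonly=True)[0]
    key = (tr.stats.station, tr.stats.location, tr.stats.channel)
    if key in seen:
        continue
    seen.add(key)
    sac = tr.stats.sac
    cha = Channel(code=tr.stats.channel, location_code=tr.stats.location,
                  latitude=sac.stla, longitude=sac.stlo, elevation=sac.stel,
                  depth=0.0, sample_rate=tr.stats.sampling_rate)
    channels.setdefault(tr.stats.station, (sac, []))[1].append(cha)

stations = [Station(code=code, latitude=sac.stla, longitude=sac.stlo,
                    elevation=sac.stel, channels=chas, site=Site(name=code))
            for code, (sac, chas) in channels.items()]
inv = Inventory(networks=[Network(code='XX', stations=stations)],
                source='built from SAC headers')      # 'XX' is a placeholder
inv.write('stations_example.xml', format='STATIONXML')
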
If you need further assistance, let me > know. > > Cheers, > Tom > > > > On 02.05.2018 13:56, Jaimes Caballero, Manuel Alejandro wrote: > > Dr. Eulenfeld, > > Thanks for your quick reply!, the program seems to work like > that and it takes less than 5 minutes. I'm wondering if it'd be > possible to use .SAC files as the input files, or would I need a > .xml file for stations and events?, I could not find anything > regarding that on seistools, I found that there was a similar > procedure for the program yam but I do not know if it applies > the same to qopen? > > Thanks a lot, > > Manuel Jaimes. > > On Wed, May 2, 2018 at 6:02 AM, Tom Eulenfeld > > >> wrote: > >     In the meantime you can try > >     qopen --njobs 1 > >     which does not use the multiprocessing module. > >     Cheers, >     Tom > > > >     On 02.05.2018 10:25, Tom Eulenfeld wrote: > >         Hi Manuel, > >         no, I've never encountered this issue before. > >         Do you mind opening a new ticket on github? > https://github.com/trichter/qopen/issues > >         > > >         Cheers! >         Tom > > >         On 01.05.2018 17:19, Jaimes Caballero, Manuel Alejandro > wrote: > >             Hi Dr. Eulenfeld, > >             I have been trying to use your software qopen, the >             installation process and everything until qopen > --tutorial >             works perfectly, when I try to run qopen in the > terminal it >             gives an error which says: Break on > > __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNTIONALITY___YOU_MUST_EXEC__() >             to debug. > >             Have you encountered this issue before or do you > know what >             might be the cause? I'm using macOS. > >             Thanks for your time. > >             Manuel Jaimes. > > > From tom.eulenfeld at uni-jena.de Fri May 4 10:27:41 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Fri, 4 May 2018 10:27:41 +0200 Subject: [qopen] alternative green's function In-Reply-To: References: <91a1a6db-25b9-0f86-e8a0-b0c09010682c@uni-jena.de> <2fb73dc6-86cf-9b09-e449-ecbf9a65a0b1@uni-jena.de> <31aba45a-d5e8-f79a-fd61-bc1880f56125@uni-jena.de> Message-ID: Hi Manuel, I tried to tweak the mailing list settings a bit. Please try again :) The only constraint should be that "[qopen]" is part of the subject. I pushed some commits to the repository which should solve the multiprocessing problem on MacOS. Can you install the dev version with pip uninstall qopen pip install https://github.com/trichter/qopen/archive/master.zip and confirm that it is working? Then, I can publish a new version. The fix basically loads a non-interactive backend for matplotlib by default. You can do this on your own with import matplotlib matplotlib.use('agg') from qopen import run run(conf="conf.json") Best! Tom On 03.05.2018 19:22, Jaimes Caballero, Manuel Alejandro wrote: > Hi Dr. Eulenfeld, > > I tried sending it to the mailing list but it was rejected for some > reason. Thanks for the quick response.  I'm trying to run the command > that is on the qopen website via python : from qopen import run . Then I > execute run(conf="conf.json") but the multiprocessing issue comes up > again, is there a similar way to execute the run line as with > run_cmdline(). > Feel free through email me back through the mailing list, I don't quite > know how it works yet. > > Thanks, > > Manuel Jaimes. > > On Thu, May 3, 2018 at 5:14 AM, Tom Eulenfeld > wrote: > > Hi Jaimes, > > yes it is possible to change the Green's function. 
It's best done in > a new module, which can be specified in the config file with option > "G_module". Please also see the example config file for some > documentation. Note that only a single scattering parameter (called > g0) can be used. > > Yes, the optimization plots already account for site amplification. > I think it is best described in the caption of one figure inside the > publication cited on the github page. > > For further questions, may I ask you to write via the mailing list > again (and specify reasonable subject)? I will answer there. Thanks! > > Best, > Tom > > > On 02.05.2018 18:11, Jaimes Caballero, Manuel Alejandro wrote: > > Hi Dr. Eulenfeld, > > Thanks a lot for the help. In the source code it should be > possible to change the 3-D green's function , right? . Also, do > the optimized plots already account for the site amplification > factor?, from what I understand from the code they do but I just > want to verify so. > > Thanks, > > Manuel Jaimes > > From tom.eulenfeld at uni-jena.de Wed Jul 18 10:43:21 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Wed, 18 Jul 2018 10:43:21 +0200 Subject: [yam] python2.7 version and run yam correlate on subsets of data In-Reply-To: References: <268006AE-4182-435A-BC0A-CB3BA09D6E5B@ictp.it> <5b82a500-62f0-a4b3-9e5d-e80f4c935861@uni-jena.de> Message-ID: <40e4ef92-912b-a4c4-6369-c36189707da3@uni-jena.de> Dear Blaž, > Just one more question. If i run yam from command line on a folder with multiple years of data inside, and specifying only one year in the parameter file, it will process just that year, right? And if then i change to another year in the param file, the process will add to the already existing database? Yes, that should work as expected. A better option might be to use the "startdate", "enddate" options in the config of the correlate command. Define a base configuration for correlate and overwrite "startdate" and "enddate" for each call to yam correlate. This could be done with separate config options (aka "based_on") or by just changing the parameter file. I hope you don't mind if I post my answer on the yam mailing list. Others might be interested. Good luck, Tom On 17.07.2018 20:18, Blaž Vičič wrote: > Dear Tom, > Too bad, thanks! I was not really planning to use multiple nodes but just one in order to use multiple processors, so multiprocessing shouldn't be an issue. Ill try to run it on my pc in the meantime , and ask our IT if they can make py3 work on my account. > > Just one more question. If i run yam from command line on a folder with multiple years of data inside, and specifying only one year in the parameter file, it will process just that year, right? And if then i change to another year in the param file, the process will add to the already existing database? > > Thanks, > Cheers, Blaž > >> On 17 Jul 2018, at 17:41, Tom Eulenfeld wrote: >> >> Dear Blaž, >> >> sorry, I do not have a python 2.7 version of the module. >> >> I attempted to create a 2.7 version with 3to2 conversion package. Unfortunately, I am not able to get yam running on python2.7 within a reasonable time. >> >> Anyway I do not know if the software is suitable for a cluster, because it uses the multiprocessing module for parallelization. Or do you want to run yam on each node on a subset of data? >> >> I have not much experience with cluster-based processing. Maybe it would be possible to install python3 in a conda environment in your user directory? >> >> Best regards! 
>> Tom >> >> >> >>> On 17.07.2018 14:59, Blaž Vičič wrote: >>> Dear Tom. >>> I wanted to use your package YAM on our cluster, since I am dealing with quite a big dataset. Sadly, we still dont use python3 on the cluster, so I guess this is an issue with your module. >>> Do you, by any chance have a py27 versin of the module? >>> Thanks >>> Blaz From blaz.vicic at gmail.com Thu Aug 9 10:04:31 2018 From: blaz.vicic at gmail.com (=?UTF-8?B?Qmxhxb4gVmnEjWnEjQ==?=) Date: Thu, 9 Aug 2018 10:04:31 +0200 Subject: [yam]ZeroDivisionError: division by zero Message-ID: Dear all. I am trying to process some data using yam. Prior to the correlation, I removed the response of the data and downsampled it to 20Hz. when I call yam correlate 1, this is the error I get: (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 0%| | 0/730 [00:00 results = [do_work(task) for task in tasks] File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", line 310, in _prep1 interpolate_options=interpolate_options) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", line 266, in _downsample_and_shift dt = 1 / target_sr ZeroDivisionError: division by zero """ The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in sys.exit(run_cmdline()) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", line 388, in run_cmdline run(**args) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", line 147, in run run2(command, **args) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", line 211, in run2 yam.commands.start_correlate(io, **args) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", line 101, in start_correlate total=len(tasks)): File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", line 930, in __iter__ for obj in iterable: File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", line 735, in next raise value ZeroDivisionError: division by zero Exception ignored in: Traceback (most recent call last): File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", line 882, in __del__ File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", line 1087, in close File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", line 439, in _decr_instances File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/_weakrefset.py", line 109, in remove KeyError: any idea whats going on here? I changed the data with the originals and added remove response and downsample to config, but the error is same. the example works though. Thanks, blaz -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.eulenfeld at uni-jena.de Thu Aug 9 11:22:23 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Thu, 9 Aug 2018 11:22:23 +0200 Subject: [yam]ZeroDivisionError: division by zero In-Reply-To: References: Message-ID: Dear Blaž, please double-check your configuration. It is possible to set downsample option to null (None) or the target frequency or to delete the option entirely. I can reproduce the reported behavior if I set downsample to false which is interpreted as a target frequency of zero. Did this solve the issue? Best, Tom On 09.08.2018 10:04, Blaž Vičič wrote: > Dear all. 
> I am trying to process some data using yam. Prior to the correlation, I > removed the response of the data and downsampled it to 20Hz. >  when I call yam correlate 1, this is the error I get: > > (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 > > 0%| | 0/730 [00:00 > """ > > Traceback (most recent call last): > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > line 119, in worker > > result = (True, func(*args, **kwds)) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > line 549, in correlate > > **preprocessing_kwargs) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > line 392, in preprocess > > stream.traces = start_parallel_jobs_inner_loop(stream, do_work, njobs) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > line 26, in start_parallel_jobs_inner_loop > > results = [do_work(task) for task in tasks] > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > line 26, in > > results = [do_work(task) for task in tasks] > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > line 310, in _prep1 > > interpolate_options=interpolate_options) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > line 266, in _downsample_and_shift > > dt = 1 / target_sr > > ZeroDivisionError: division by zero > > """ > > > The above exception was the direct cause of the following exception: > > > Traceback (most recent call last): > > File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in > > sys.exit(run_cmdline()) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > line 388, in run_cmdline > > run(**args) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > line 147, in run > > run2(command, **args) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > line 211, in run2 > > yam.commands.start_correlate(io, **args) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", > line 101, in start_correlate > > total=len(tasks)): > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > line 930, in __iter__ > > for obj in iterable: > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > line 735, in next > > raise value > > ZeroDivisionError: division by zero > > Exception ignored in: [00:00 > > Traceback (most recent call last): > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > line 882, in __del__ > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > line 1087, in close > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > line 439, in _decr_instances > > File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/_weakrefset.py", > line 109, in remove > > KeyError: > > > any idea whats going on here? I changed the data with the originals and > added remove response and downsample to config, but the error is same. > > the example works though. 
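(Schematically, the failure mode described above, not yam's actual code path; valid settings for "downsample" are null, a target frequency in Hz, or omitting the option entirely:)

# "downsample": false in the JSON config arrives in Python as False,
# which is then used as the numeric target sampling rate, i.e. 0 Hz.
target_sr = False
dt = 1 / target_sr  # ZeroDivisionError: division by zero
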
> > Thanks, > blaz > > > _______________________________________________ > seistools mailing list > seistools at listserv.uni-jena.de > https://lserv.uni-jena.de/mailman/listinfo/seistools > From blaz.vicic at gmail.com Thu Aug 9 11:35:29 2018 From: blaz.vicic at gmail.com (=?UTF-8?B?Qmxhxb4gVmnEjWnEjQ==?=) Date: Thu, 9 Aug 2018 11:35:29 +0200 Subject: [yam]ZeroDivisionError: division by zero In-Reply-To: References: Message-ID: Tom, thanks. its exactly what u said... i have set downsample to false. whit removal of this, everything works. cheers On Thu, 9 Aug 2018 at 11:22 Tom Eulenfeld wrote: > Dear Blaž, > > please double-check your configuration. It is possible to set downsample > option to null (None) or the target frequency or to delete the option > entirely. > > I can reproduce the reported behavior if I set downsample to false which > is interpreted as a target frequency of zero. > > Did this solve the issue? > > Best, > Tom > > > > On 09.08.2018 10:04, Blaž Vičič wrote: > > Dear all. > > I am trying to process some data using yam. Prior to the correlation, I > > removed the response of the data and downsampled it to 20Hz. > > when I call yam correlate 1, this is the error I get: > > > > (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 > > > > 0%| | 0/730 [00:00 > > > """ > > > > Traceback (most recent call last): > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > > line 119, in worker > > > > result = (True, func(*args, **kwds)) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > > line 549, in correlate > > > > **preprocessing_kwargs) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > > line 392, in preprocess > > > > stream.traces = start_parallel_jobs_inner_loop(stream, do_work, njobs) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > > line 26, in start_parallel_jobs_inner_loop > > > > results = [do_work(task) for task in tasks] > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > > line 26, in > > > > results = [do_work(task) for task in tasks] > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > > line 310, in _prep1 > > > > interpolate_options=interpolate_options) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > > line 266, in _downsample_and_shift > > > > dt = 1 / target_sr > > > > ZeroDivisionError: division by zero > > > > """ > > > > > > The above exception was the direct cause of the following exception: > > > > > > Traceback (most recent call last): > > > > File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in > > > > sys.exit(run_cmdline()) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 388, in run_cmdline > > > > run(**args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 147, in run > > > > run2(command, **args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 211, in run2 > > > > yam.commands.start_correlate(io, **args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", > > > line 101, in start_correlate > > > > total=len(tasks)): > > > > File > > > 
"/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > > > line 930, in __iter__ > > > > for obj in iterable: > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > > line 735, in next > > > > raise value > > > > ZeroDivisionError: division by zero > > > > Exception ignored in: > [00:00 > > > > Traceback (most recent call last): > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > > > line 882, in __del__ > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > > > line 1087, in close > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > > > line 439, in _decr_instances > > > > File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/_weakrefset.py", > > line 109, in remove > > > > KeyError: > > > > > > any idea whats going on here? I changed the data with the originals and > > added remove response and downsample to config, but the error is same. > > > > the example works though. > > > > Thanks, > > blaz > > > > > > _______________________________________________ > > seistools mailing list > > seistools at listserv.uni-jena.de > > https://lserv.uni-jena.de/mailman/listinfo/seistools > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From blaz.vicic at gmail.com Tue Aug 14 07:30:01 2018 From: blaz.vicic at gmail.com (=?UTF-8?B?Qmxhxb4gVmnEjWnEjQ==?=) Date: Tue, 14 Aug 2018 07:30:01 +0200 Subject: [yam]Metadata Message-ID: Hello again. Another day, another problem. I am trying to process few years of data for a set of stations. I already removed the instrumental response and downsampled the data. The error I get is this one: (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 78%|██████████████████████████████████████████████████████████████████████████████████████████▊ | 2572/3287 [5:37:02<1:33:41, 7.86s/it]multiprocessing.pool.RemoteTraceback: """ Traceback (most recent call last): File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", line 569, in correlate stream2[0].id, datetime=stream2[0].stats.endtime) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", line 430, in get_coordinates metadata = self.get_channel_metadata(seed_id, datetime) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", line 406, in get_channel_metadata raise Exception(msg) Exception: No matching channel metadata found. 
""" The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in sys.exit(run_cmdline()) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", line 388, in run_cmdline run(**args) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", line 147, in run run2(command, **args) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", line 211, in run2 yam.commands.start_correlate(io, **args) File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", line 101, in start_correlate total=len(tasks)): File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", line 930, in __iter__ for obj in iterable: File "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", line 735, in next raise value Exception: No matching channel metadata found. In the first run I did, the error happened somewhere at the beginning (iteration 200/3000+) so I checked if maybe my miniseeds have wrong sta/chan inside. But they are all what they should be. I even forced the tr.stats.station/chan to be exactly what I wanted. But the error happened again. So I removed the first year of data, but now the error happened again somewhere later in the dataset. Any idea what could be wrong or how to go past this? It would be useful if I would know in which miniseeds to look for the problem. Cheers Blaz -------------- next part -------------- An HTML attachment was scrubbed... URL: From tom.eulenfeld at uni-jena.de Tue Aug 14 12:36:58 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Tue, 14 Aug 2018 12:36:58 +0200 Subject: [yam]Metadata In-Reply-To: References: Message-ID: <134b92cf-9dd0-d822-eb11-aea49bdc4edc@uni-jena.de> Hi Blaz, if the metadata in the miniseed is correct, maybe it is a problem with the inventory information? There could be a gap inside the inventory when the station was moved or maintained? Could also be a bug in obspy. I will add some code to catch the exception and display a more meaningful log message. Cheers, Tom On 14.08.2018 07:30, Blaž Vičič wrote: > Hello again. > Another day, another problem. > > I am trying to process few years of data for a set of stations. I > already removed the instrumental response and downsampled the data. The > error I get is this one: > > (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 > > 78%|██████████████████████████████████████████████████████████████████████████████████████████▊ > | 2572/3287 [5:37:02<1:33:41,7.86s/it]multiprocessing.pool.RemoteTraceback: > > """ > > Traceback (most recent call last): > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > line 119, in worker > > result = (True, func(*args, **kwds)) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > line 569, in correlate > > stream2[0].id, datetime=stream2[0].stats.endtime) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", > line 430, in get_coordinates > > metadata = self.get_channel_metadata(seed_id, datetime) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", > line 406, in get_channel_metadata > > raise Exception(msg) > > Exception: No matching channel metadata found. 
> > """ > > > The above exception was the direct cause of the following exception: > > > Traceback (most recent call last): > > File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in > > sys.exit(run_cmdline()) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > line 388, in run_cmdline > > run(**args) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > line 147, in run > > run2(command, **args) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > line 211, in run2 > > yam.commands.start_correlate(io, **args) > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", > line 101, in start_correlate > > total=len(tasks)): > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > line 930, in __iter__ > > for obj in iterable: > > File > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > line 735, in next > > raise value > > Exception: No matching channel metadata found. > > > In the first run I did, the error happened somewhere at the beginning > (iteration 200/3000+) so I checked if maybe my miniseeds have wrong > sta/chan inside. But they are all what they should be. I even forced the > tr.stats.station/chan to be exactly what I wanted. But the error > happened again. So I removed the first year of data, but now the error > happened again somewhere later in the dataset. Any idea what could be > wrong or how to go past this? It would be useful if I would know in > which miniseeds to look for the problem. > > Cheers > Blaz > > > _______________________________________________ > seistools mailing list > seistools at listserv.uni-jena.de > https://lserv.uni-jena.de/mailman/listinfo/seistools > From blaz.vicic at gmail.com Tue Aug 14 13:11:38 2018 From: blaz.vicic at gmail.com (=?UTF-8?B?Qmxhxb4gVmnEjWnEjQ==?=) Date: Tue, 14 Aug 2018 13:11:38 +0200 Subject: [yam]Metadata In-Reply-To: <134b92cf-9dd0-d822-eb11-aea49bdc4edc@uni-jena.de> References: <134b92cf-9dd0-d822-eb11-aea49bdc4edc@uni-jena.de> Message-ID: I doubt this is the problem of inventory... The miniseeds were pre-procesed and I removed the response using obspy for all the files then used them as an input to yam. staxml files in yam are the same i used for the removal. thanks On Tue, 14 Aug 2018 at 12:36 Tom Eulenfeld wrote: > Hi Blaz, > > if the metadata in the miniseed is correct, maybe it is a problem with > the inventory information? There could be a gap inside the inventory > when the station was moved or maintained? Could also be a bug in obspy. > > I will add some code to catch the exception and display a more > meaningful log message. > > Cheers, > Tom > > > > On 14.08.2018 07:30, Blaž Vičič wrote: > > Hello again. > > Another day, another problem. > > > > I am trying to process few years of data for a set of stations. I > > already removed the instrumental response and downsampled the data. 
The > > error I get is this one: > > > > (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 > > > > > 78%|██████████████████████████████████████████████████████████████████████████████████████████▊ > > > | 2572/3287 > [5:37:02<1:33:41,7.86s/it]multiprocessing.pool.RemoteTraceback: > > > > """ > > > > Traceback (most recent call last): > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > > line 119, in worker > > > > result = (True, func(*args, **kwds)) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > > line 569, in correlate > > > > stream2[0].id, datetime=stream2[0].stats.endtime) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", > > > line 430, in get_coordinates > > > > metadata = self.get_channel_metadata(seed_id, datetime) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", > > > line 406, in get_channel_metadata > > > > raise Exception(msg) > > > > Exception: No matching channel metadata found. > > > > """ > > > > > > The above exception was the direct cause of the following exception: > > > > > > Traceback (most recent call last): > > > > File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in > > > > sys.exit(run_cmdline()) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 388, in run_cmdline > > > > run(**args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 147, in run > > > > run2(command, **args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 211, in run2 > > > > yam.commands.start_correlate(io, **args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", > > > line 101, in start_correlate > > > > total=len(tasks)): > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > > > line 930, in __iter__ > > > > for obj in iterable: > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > > line 735, in next > > > > raise value > > > > Exception: No matching channel metadata found. > > > > > > In the first run I did, the error happened somewhere at the beginning > > (iteration 200/3000+) so I checked if maybe my miniseeds have wrong > > sta/chan inside. But they are all what they should be. I even forced the > > tr.stats.station/chan to be exactly what I wanted. But the error > > happened again. So I removed the first year of data, but now the error > > happened again somewhere later in the dataset. Any idea what could be > > wrong or how to go past this? It would be useful if I would know in > > which miniseeds to look for the problem. > > > > Cheers > > Blaz > > > > > > _______________________________________________ > > seistools mailing list > > seistools at listserv.uni-jena.de > > https://lserv.uni-jena.de/mailman/listinfo/seistools > > > _______________________________________________ > seistools mailing list > seistools at listserv.uni-jena.de > https://lserv.uni-jena.de/mailman/listinfo/seistools > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tom.eulenfeld at uni-jena.de Tue Aug 14 15:12:02 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Tue, 14 Aug 2018 15:12:02 +0200 Subject: [yam]Metadata In-Reply-To: References: <134b92cf-9dd0-d822-eb11-aea49bdc4edc@uni-jena.de> Message-ID: <4df7a04a-e518-b1ec-4d59-43ac2762d57e@uni-jena.de> What would be he desired behavior? Print more information and raise an exception. Or just log the exception and continue with the next iteration. I tend to implement the first option. But there is no hurry, because I found the --pdb option which I implemented (but forgot about). You can find out the time yourself by starting yam --pdb correlate -n1 1 and then inspect stream1[0].id and stream1[0].stats.endtime (or stream2[0].id and stream2[0].stats.endtime) when the error occurs. When I think more about it, it might be a problem that I used the endtime and not some time between starttime and endtime. (still assuming there is some kind of gap in the inventory) Cheers, Tom On 14.08.2018 13:11, Blaž Vičič wrote: > I doubt this is the problem of inventory... The miniseeds were > pre-procesed and I removed the response using obspy for all the files > then used them as an input to yam. staxml files in yam are the same i > used for the removal. > > thanks > > On Tue, 14 Aug 2018 at 12:36 Tom Eulenfeld > wrote: > > Hi Blaz, > > if the metadata in the miniseed is correct, maybe it is a problem with > the inventory information? There could be a gap inside the inventory > when the station was moved or maintained? Could also be a bug in obspy. > > I will add some code to catch the exception and display a more > meaningful log message. > > Cheers, > Tom > > > > On 14.08.2018 07:30, Blaž Vičič wrote: > > Hello again. > > Another day, another problem. > > > > I am trying to process few years of data for a set of stations. I > > already removed the instrumental response and downsampled the > data. The > > error I get is this one: > > > > (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 > > > > > 78%|██████████████████████████████████████████████████████████████████████████████████████████▊ > > > | 2572/3287 > [5:37:02<1:33:41,7.86s/it]multiprocessing.pool.RemoteTraceback: > > > > """ > > > > Traceback (most recent call last): > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > > > line 119, in worker > > > > result = (True, func(*args, **kwds)) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > > line 569, in correlate > > > > stream2[0].id, datetime=stream2[0].stats.endtime) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", > > > line 430, in get_coordinates > > > > metadata = self.get_channel_metadata(seed_id, datetime) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", > > > line 406, in get_channel_metadata > > > > raise Exception(msg) > > > > Exception: No matching channel metadata found. 
> > > > """ > > > > > > The above exception was the direct cause of the following exception: > > > > > > Traceback (most recent call last): > > > > File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in > > > > > sys.exit(run_cmdline()) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 388, in run_cmdline > > > > run(**args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 147, in run > > > > run2(command, **args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > > line 211, in run2 > > > > yam.commands.start_correlate(io, **args) > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", > > > line 101, in start_correlate > > > > total=len(tasks)): > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > > > line 930, in __iter__ > > > > for obj in iterable: > > > > File > > > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > > > line 735, in next > > > > raise value > > > > Exception: No matching channel metadata found. > > > > > > In the first run I did, the error happened somewhere at the > beginning > > (iteration 200/3000+) so I checked if maybe my miniseeds have wrong > > sta/chan inside. But they are all what they should be. I even > forced the > > tr.stats.station/chan to be exactly what I wanted. But the error > > happened again. So I removed the first year of data, but now the > error > > happened again somewhere later in the dataset. Any idea what > could be > > wrong or how to go past this? It would be useful if I would know in > > which miniseeds to look for the problem. > > > > Cheers > > Blaz > > > > > > _______________________________________________ > > seistools mailing list > > seistools at listserv.uni-jena.de > > > https://lserv.uni-jena.de/mailman/listinfo/seistools > > > _______________________________________________ > seistools mailing list > seistools at listserv.uni-jena.de > https://lserv.uni-jena.de/mailman/listinfo/seistools > From tom.eulenfeld at uni-jena.de Tue Aug 14 16:24:47 2018 From: tom.eulenfeld at uni-jena.de (Tom Eulenfeld) Date: Tue, 14 Aug 2018 16:24:47 +0200 Subject: [yam]Metadata In-Reply-To: <4df7a04a-e518-b1ec-4d59-43ac2762d57e@uni-jena.de> References: <134b92cf-9dd0-d822-eb11-aea49bdc4edc@uni-jena.de> <4df7a04a-e518-b1ec-4d59-43ac2762d57e@uni-jena.de> Message-ID: Hi Blaz, I committed a more verbose exception. You can try the dev version of Yam, e.g. conda uninstall yam pip install https://github.com/trichter/yam/archive/master.zip Cheers, Tom On 14.08.2018 15:12, Tom Eulenfeld wrote: > What would be he desired behavior? Print more information and raise an > exception. Or just log the exception and continue with the next > iteration. I tend to implement the first option. > > But there is no hurry, because I found the --pdb option which I > implemented (but forgot about). > You can find out the time yourself by starting > > yam --pdb correlate -n1 1 > > and then inspect stream1[0].id and stream1[0].stats.endtime > (or stream2[0].id and stream2[0].stats.endtime) > when the error occurs. > > When I think more about it, it might be a problem that I used the > endtime and not some time between starttime and endtime. 
(still assuming > there is some kind of gap in the inventory) > > Cheers, > Tom > > > > On 14.08.2018 13:11, Blaž Vičič wrote: >> I doubt this is the problem of inventory... The miniseeds were >> pre-procesed and I removed the response using obspy for all the files >> then used them as an input to yam. staxml files in yam are the same i >> used for the removal. >> >> thanks >> >> On Tue, 14 Aug 2018 at 12:36 Tom Eulenfeld > > wrote: >> >>     Hi Blaz, >> >>     if the metadata in the miniseed is correct, maybe it is a problem >> with >>     the inventory information? There could be a gap inside the inventory >>     when the station was moved or maintained? Could also be a bug in >> obspy. >> >>     I will add some code to catch the exception and display a more >>     meaningful log message. >> >>     Cheers, >>     Tom >> >> >> >>     On 14.08.2018 07:30, Blaž Vičič wrote: >>      > Hello again. >>      > Another day, another problem. >>      > >>      > I am trying to process few years of data for a set of stations. I >>      > already removed the instrumental response and downsampled the >>     data. The >>      > error I get is this one: >>      > >>      > (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 >>      > >>      > >> >> 78%|██████████████████████████████████████████████████████████████████████████████████████████▊ >> >> >>      > | 2572/3287 >>     [5:37:02<1:33:41,7.86s/it]multiprocessing.pool.RemoteTraceback: >>      > >>      > """ >>      > >>      > Traceback (most recent call last): >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", >> >> >>      > line 119, in worker >>      > >>      > result = (True, func(*args, **kwds)) >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", >> >> >>      > line 569, in correlate >>      > >>      > stream2[0].id, datetime=stream2[0].stats.endtime) >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", >> >> >>      > line 430, in get_coordinates >>      > >>      > metadata = self.get_channel_metadata(seed_id, datetime) >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", >> >> >>      > line 406, in get_channel_metadata >>      > >>      > raise Exception(msg) >>      > >>      > Exception: No matching channel metadata found. 
>>      > >>      > """ >>      > >>      > >>      > The above exception was the direct cause of the following >> exception: >>      > >>      > >>      > Traceback (most recent call last): >>      > >>      > File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in >>     >>      > >>      > sys.exit(run_cmdline()) >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", >> >> >>      > line 388, in run_cmdline >>      > >>      > run(**args) >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", >> >> >>      > line 147, in run >>      > >>      > run2(command, **args) >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", >> >> >>      > line 211, in run2 >>      > >>      > yam.commands.start_correlate(io, **args) >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", >> >> >>      > line 101, in start_correlate >>      > >>      > total=len(tasks)): >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", >> >> >>      > line 930, in __iter__ >>      > >>      > for obj in iterable: >>      > >>      > File >>      > >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", >> >> >>      > line 735, in next >>      > >>      > raise value >>      > >>      > Exception: No matching channel metadata found. >>      > >>      > >>      > In the first run I did, the error happened somewhere at the >>     beginning >>      > (iteration 200/3000+) so I checked if maybe my miniseeds have >> wrong >>      > sta/chan inside. But they are all what they should be. I even >>     forced the >>      > tr.stats.station/chan to be exactly what I wanted. But the error >>      > happened again. So I removed the first year of data, but now the >>     error >>      > happened again somewhere later in the dataset. Any idea what >>     could be >>      > wrong or how to go past this? It would be useful if I would >> know in >>      > which miniseeds to look for the problem. >>      > >>      > Cheers >>      > Blaz >>      > >>      > >>      > _______________________________________________ >>      > seistools mailing list >>      > seistools at listserv.uni-jena.de >>     >>      > https://lserv.uni-jena.de/mailman/listinfo/seistools >>      > >>     _______________________________________________ >>     seistools mailing list >>     seistools at listserv.uni-jena.de >> >>     https://lserv.uni-jena.de/mailman/listinfo/seistools >> > _______________________________________________ > seistools mailing list > seistools at listserv.uni-jena.de > https://lserv.uni-jena.de/mailman/listinfo/seistools From blaz.vicic at gmail.com Tue Aug 14 17:42:33 2018 From: blaz.vicic at gmail.com (=?UTF-8?B?Qmxhxb4gVmnEjWnEjQ==?=) Date: Tue, 14 Aug 2018 17:42:33 +0200 Subject: [yam]Metadata In-Reply-To: References: <134b92cf-9dd0-d822-eb11-aea49bdc4edc@uni-jena.de> <4df7a04a-e518-b1ec-4d59-43ac2762d57e@uni-jena.de> Message-ID: Thanks! Ill try on Thursday. Hopefully Ill find the problematic data. Cheers On Tue, Aug 14, 2018, 16:24 Tom Eulenfeld wrote: > Hi Blaz, I committed a more verbose exception. > You can try the dev version of Yam, e.g. 
> > conda uninstall yam > pip install https://github.com/trichter/yam/archive/master.zip > > Cheers, > Tom > > > > On 14.08.2018 15:12, Tom Eulenfeld wrote: > > What would be he desired behavior? Print more information and raise an > > exception. Or just log the exception and continue with the next > > iteration. I tend to implement the first option. > > > > But there is no hurry, because I found the --pdb option which I > > implemented (but forgot about). > > You can find out the time yourself by starting > > > > yam --pdb correlate -n1 1 > > > > and then inspect stream1[0].id and stream1[0].stats.endtime > > (or stream2[0].id and stream2[0].stats.endtime) > > when the error occurs. > > > > When I think more about it, it might be a problem that I used the > > endtime and not some time between starttime and endtime. (still assuming > > there is some kind of gap in the inventory) > > > > Cheers, > > Tom > > > > > > > > On 14.08.2018 13:11, Blaž Vičič wrote: > >> I doubt this is the problem of inventory... The miniseeds were > >> pre-procesed and I removed the response using obspy for all the files > >> then used them as an input to yam. staxml files in yam are the same i > >> used for the removal. > >> > >> thanks > >> > >> On Tue, 14 Aug 2018 at 12:36 Tom Eulenfeld >> > wrote: > >> > >> Hi Blaz, > >> > >> if the metadata in the miniseed is correct, maybe it is a problem > >> with > >> the inventory information? There could be a gap inside the inventory > >> when the station was moved or maintained? Could also be a bug in > >> obspy. > >> > >> I will add some code to catch the exception and display a more > >> meaningful log message. > >> > >> Cheers, > >> Tom > >> > >> > >> > >> On 14.08.2018 07:30, Blaž Vičič wrote: > >> > Hello again. > >> > Another day, another problem. > >> > > >> > I am trying to process few years of data for a set of stations. I > >> > already removed the instrumental response and downsampled the > >> data. The > >> > error I get is this one: > >> > > >> > (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 > >> > > >> > > >> > >> > 78%|██████████████████████████████████████████████████████████████████████████████████████████▊ > > >> > >> > >> > | 2572/3287 > >> [5:37:02<1:33:41,7.86s/it]multiprocessing.pool.RemoteTraceback: > >> > > >> > """ > >> > > >> > Traceback (most recent call last): > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > >> > >> > >> > line 119, in worker > >> > > >> > result = (True, func(*args, **kwds)) > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", > > >> > >> > >> > line 569, in correlate > >> > > >> > stream2[0].id, datetime=stream2[0].stats.endtime) > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", > > >> > >> > >> > line 430, in get_coordinates > >> > > >> > metadata = self.get_channel_metadata(seed_id, datetime) > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", > > >> > >> > >> > line 406, in get_channel_metadata > >> > > >> > raise Exception(msg) > >> > > >> > Exception: No matching channel metadata found. 
> >> > > >> > """ > >> > > >> > > >> > The above exception was the direct cause of the following > >> exception: > >> > > >> > > >> > Traceback (most recent call last): > >> > > >> > File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in > >> > >> > > >> > sys.exit(run_cmdline()) > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > >> > >> > >> > line 388, in run_cmdline > >> > > >> > run(**args) > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > >> > >> > >> > line 147, in run > >> > > >> > run2(command, **args) > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", > > >> > >> > >> > line 211, in run2 > >> > > >> > yam.commands.start_correlate(io, **args) > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", > > >> > >> > >> > line 101, in start_correlate > >> > > >> > total=len(tasks)): > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", > > >> > >> > >> > line 930, in __iter__ > >> > > >> > for obj in iterable: > >> > > >> > File > >> > > >> > >> > "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", > >> > >> > >> > line 735, in next > >> > > >> > raise value > >> > > >> > Exception: No matching channel metadata found. > >> > > >> > > >> > In the first run I did, the error happened somewhere at the > >> beginning > >> > (iteration 200/3000+) so I checked if maybe my miniseeds have > >> wrong > >> > sta/chan inside. But they are all what they should be. I even > >> forced the > >> > tr.stats.station/chan to be exactly what I wanted. But the error > >> > happened again. So I removed the first year of data, but now the > >> error > >> > happened again somewhere later in the dataset. Any idea what > >> could be > >> > wrong or how to go past this? It would be useful if I would > >> know in > >> > which miniseeds to look for the problem. > >> > > >> > Cheers > >> > Blaz > >> > > >> > > >> > _______________________________________________ > >> > seistools mailing list > >> > seistools at listserv.uni-jena.de > >> > >> > https://lserv.uni-jena.de/mailman/listinfo/seistools > >> > > >> _______________________________________________ > >> seistools mailing list > >> seistools at listserv.uni-jena.de > >> > >> https://lserv.uni-jena.de/mailman/listinfo/seistools > >> > > _______________________________________________ > > seistools mailing list > > seistools at listserv.uni-jena.de > > https://lserv.uni-jena.de/mailman/listinfo/seistools > _______________________________________________ > seistools mailing list > seistools at listserv.uni-jena.de > https://lserv.uni-jena.de/mailman/listinfo/seistools > -------------- next part -------------- An HTML attachment was scrubbed... URL: From blaz.vicic at gmail.com Thu Aug 16 11:46:56 2018 From: blaz.vicic at gmail.com (=?UTF-8?B?Qmxhxb4gVmnEjWnEjQ==?=) Date: Thu, 16 Aug 2018 11:46:56 +0200 Subject: [yam]Metadata In-Reply-To: References: <134b92cf-9dd0-d822-eb11-aea49bdc4edc@uni-jena.de> <4df7a04a-e518-b1ec-4d59-43ac2762d57e@uni-jena.de> Message-ID: Thank you very much! this makes everything much more convenient! There was a one day gap in the staXML although the day was not missing. 
Interestingly, all the years of this network are slightly strange, since with the new sensors, also polarities changes ... cheers On Tue, 14 Aug 2018 at 17:42 Blaž Vičič wrote: > Thanks! Ill try on Thursday. Hopefully Ill find the problematic data. > > Cheers > > > On Tue, Aug 14, 2018, 16:24 Tom Eulenfeld > wrote: > >> Hi Blaz, I committed a more verbose exception. >> You can try the dev version of Yam, e.g. >> >> conda uninstall yam >> pip install https://github.com/trichter/yam/archive/master.zip >> >> Cheers, >> Tom >> >> >> >> On 14.08.2018 15:12, Tom Eulenfeld wrote: >> > What would be he desired behavior? Print more information and raise an >> > exception. Or just log the exception and continue with the next >> > iteration. I tend to implement the first option. >> > >> > But there is no hurry, because I found the --pdb option which I >> > implemented (but forgot about). >> > You can find out the time yourself by starting >> > >> > yam --pdb correlate -n1 1 >> > >> > and then inspect stream1[0].id and stream1[0].stats.endtime >> > (or stream2[0].id and stream2[0].stats.endtime) >> > when the error occurs. >> > >> > When I think more about it, it might be a problem that I used the >> > endtime and not some time between starttime and endtime. (still >> assuming >> > there is some kind of gap in the inventory) >> > >> > Cheers, >> > Tom >> > >> > >> > >> > On 14.08.2018 13:11, Blaž Vičič wrote: >> >> I doubt this is the problem of inventory... The miniseeds were >> >> pre-procesed and I removed the response using obspy for all the files >> >> then used them as an input to yam. staxml files in yam are the same i >> >> used for the removal. >> >> >> >> thanks >> >> >> >> On Tue, 14 Aug 2018 at 12:36 Tom Eulenfeld > >> > wrote: >> >> >> >> Hi Blaz, >> >> >> >> if the metadata in the miniseed is correct, maybe it is a problem >> >> with >> >> the inventory information? There could be a gap inside the >> inventory >> >> when the station was moved or maintained? Could also be a bug in >> >> obspy. >> >> >> >> I will add some code to catch the exception and display a more >> >> meaningful log message. >> >> >> >> Cheers, >> >> Tom >> >> >> >> >> >> >> >> On 14.08.2018 07:30, Blaž Vičič wrote: >> >> > Hello again. >> >> > Another day, another problem. >> >> > >> >> > I am trying to process few years of data for a set of stations. >> I >> >> > already removed the instrumental response and downsampled the >> >> data. 
The >> >> > error I get is this one: >> >> > >> >> > (obspy) pb-vicic:proc_2 bvicic$ yam correlate 1 >> >> > >> >> > >> >> >> >> >> 78%|██████████████████████████████████████████████████████████████████████████████████████████▊ >> >> >> >> >> >> >> > | 2572/3287 >> >> [5:37:02<1:33:41,7.86s/it]multiprocessing.pool.RemoteTraceback: >> >> > >> >> > """ >> >> > >> >> > Traceback (most recent call last): >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", >> >> >> >> >> >> > line 119, in worker >> >> > >> >> > result = (True, func(*args, **kwds)) >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/correlate.py", >> >> >> >> >> >> >> > line 569, in correlate >> >> > >> >> > stream2[0].id, datetime=stream2[0].stats.endtime) >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", >> >> >> >> >> >> >> > line 430, in get_coordinates >> >> > >> >> > metadata = self.get_channel_metadata(seed_id, datetime) >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/obspy/core/inventory/inventory.py", >> >> >> >> >> >> >> > line 406, in get_channel_metadata >> >> > >> >> > raise Exception(msg) >> >> > >> >> > Exception: No matching channel metadata found. >> >> > >> >> > """ >> >> > >> >> > >> >> > The above exception was the direct cause of the following >> >> exception: >> >> > >> >> > >> >> > Traceback (most recent call last): >> >> > >> >> > File "/Users/bvicic/anaconda3/envs/obspy/bin/yam", line 11, in >> >> >> >> > >> >> > sys.exit(run_cmdline()) >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", >> >> >> >> >> >> >> > line 388, in run_cmdline >> >> > >> >> > run(**args) >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", >> >> >> >> >> >> >> > line 147, in run >> >> > >> >> > run2(command, **args) >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/main.py", >> >> >> >> >> >> >> > line 211, in run2 >> >> > >> >> > yam.commands.start_correlate(io, **args) >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/yam/commands.py", >> >> >> >> >> >> >> > line 101, in start_correlate >> >> > >> >> > total=len(tasks)): >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/site-packages/tqdm/_tqdm.py", >> >> >> >> >> >> >> > line 930, in __iter__ >> >> > >> >> > for obj in iterable: >> >> > >> >> > File >> >> > >> >> >> >> >> "/Users/bvicic/anaconda3/envs/obspy/lib/python3.6/multiprocessing/pool.py", >> >> >> >> >> >> > line 735, in next >> >> > >> >> > raise value >> >> > >> >> > Exception: No matching channel metadata found. >> >> > >> >> > >> >> > In the first run I did, the error happened somewhere at the >> >> beginning >> >> > (iteration 200/3000+) so I checked if maybe my miniseeds have >> >> wrong >> >> > sta/chan inside. But they are all what they should be. I even >> >> forced the >> >> > tr.stats.station/chan to be exactly what I wanted. But the error >> >> > happened again. So I removed the first year of data, but now the >> >> error >> >> > happened again somewhere later in the dataset. 
Any idea what >> >> could be >> >> > wrong or how to go past this? It would be useful if I would >> >> know in >> >> > which miniseeds to look for the problem. >> >> > >> >> > Cheers >> >> > Blaz >> >> > >> >> > >> >> > _______________________________________________ >> >> > seistools mailing list >> >> > seistools at listserv.uni-jena.de >> >> >> >> > https://lserv.uni-jena.de/mailman/listinfo/seistools >> >> > >> >> _______________________________________________ >> >> seistools mailing list >> >> seistools at listserv.uni-jena.de >> >> >> >> https://lserv.uni-jena.de/mailman/listinfo/seistools >> >> >> > _______________________________________________ >> > seistools mailing list >> > seistools at listserv.uni-jena.de >> > https://lserv.uni-jena.de/mailman/listinfo/seistools >> _______________________________________________ >> seistools mailing list >> seistools at listserv.uni-jena.de >> https://lserv.uni-jena.de/mailman/listinfo/seistools >> > -------------- next part -------------- An HTML attachment was scrubbed... URL:
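For completeness: a gap like the one-day hole Blaž found can also be located by listing the channel epochs stored in the StationXML itself, rather than probing day by day. Again only a sketch, not how it was done in the thread; the file name is a placeholder:

from collections import defaultdict
from obspy import read_inventory

inv = read_inventory("stations.xml")   # placeholder file name
epochs = defaultdict(list)
for net in inv.networks:
    for sta in net.stations:
        for cha in sta.channels:
            key = ".".join((net.code, sta.code, cha.location_code, cha.code))
            epochs[key].append((cha.start_date, cha.end_date))

for key, spans in sorted(epochs.items()):
    spans.sort(key=lambda span: span[0])
    for (start1, end1), (start2, end2) in zip(spans, spans[1:]):
        if end1 is not None and start2 > end1:   # hole between two consecutive epochs
            print(key, "has no metadata between", end1, "and", start2)

Such a hole between two consecutive epochs of the same channel is enough to trigger the "No matching channel metadata found" error, because correlate requests metadata at the end time of each stream, and a stream ending inside the hole finds nothing.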