windows docker logs are empty #5979

Closed
ilyaiqoqo opened this issue Jul 18, 2019 · 8 comments


@ilyaiqoqo
Contributor

ilyaiqoqo commented Jul 18, 2019

Nomad version

Nomad v0.9.4-dev (87d716e)

Operating system and Environment details

Windows 10 1809
Docker version 19.03.0-rc2, build f97efcc

Issue

Allocation logs are empty in the latest Nomad version, but they work fine on the v0.9.3 tag.

Reproduction steps

Run a job on Windows:

  1. Run nomad agent -dev
  2. Submit a job that prints to its logs (similar to the one below); an example session is sketched just after these steps
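
A minimal example session (the file name job49.nomad is an assumption; the allocation ID will differ per run):

PS C:\> .\nomad-latest.exe agent -dev
PS C:\> .\nomad-latest.exe job run .\job49.nomad
PS C:\> .\nomad-latest.exe job status job49   # note the allocation ID used with alloc logs below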

Expected result:

C:\> .\nomad-latest.exe alloc logs cf4120be sleepTask
Python 3.7.3

Actual result: the log is empty and a terminal error appears in the client log:

2019-07-18T13:38:31.239+0100 [ERROR] client.driver_mgr.docker.docker_logger.nomad-latest.exe: log streaming ended with terminal error: driver=docker @module=docker_logger error="open //./pipe/sleepTask-4f1fcd22.stdout: The system cannot find the file specified." timestamp=2019-07-18T13:38:31.239+0100

Job file (if appropriate)

job "job49" {
  datacenters = ["dc1"]

  region = "global"

  type = "batch"

  group "epictaskgroup1" {
    reschedule {
      attempts = 0
      unlimited = false
    }

    restart {
      attempts = 0
    }

    task "sleepTask" {
      constraint {
        attribute = "${attr.kernel.name}"
        value     = "windows"
      }

      driver = "docker"

      config {
        work_dir = "/local"
        image = "python:3-windowsservercore"
        command = "python"
        args = ["--version"]
      }

      kill_timeout = "10000s"
    }
  }
}

Nomad Client logs (if appropriate)

PS C:\> .\nomad-latest.exe agent -dev
==> No configuration files loaded
==> Starting Nomad agent...
==> Nomad agent configuration:

   Advertise Addrs: HTTP: 127.0.0.1:4646; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
        Bind Addrs: HTTP: 127.0.0.1:4646; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
            Client: true
         Log Level: DEBUG
            Region: global (DC: dc1)
            Server: true
           Version: 0.9.4-dev

==> Nomad agent started! Log data will stream in below:

    2019-07-18T13:34:54.615+0100 [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=
    2019-07-18T13:34:54.615+0100 [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=
    2019-07-18T13:34:54.615+0100 [INFO ] agent: detected plugin: name=mock_driver type=driver plugin_version=0.1.0
    2019-07-18T13:34:54.615+0100 [INFO ] agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
    2019-07-18T13:34:54.615+0100 [INFO ] agent: detected plugin: name=exec type=driver plugin_version=0.1.0
    2019-07-18T13:34:54.615+0100 [INFO ] agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
    2019-07-18T13:34:54.615+0100 [INFO ] agent: detected plugin: name=java type=driver plugin_version=0.1.0
    2019-07-18T13:34:54.615+0100 [INFO ] agent: detected plugin: name=docker type=driver plugin_version=0.1.0
    2019-07-18T13:34:54.616+0100 [INFO ] nomad: raft: Initial configuration (index=1): [{Suffrage:Voter ID:127.0.0.1:4647 Address:127.0.0.1:4647}]
    2019-07-18T13:34:54.616+0100 [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Follower] entering Follower state (Leader: "")
    2019-07-18T13:34:54.618+0100 [INFO ] nomad: serf: EventMemberJoin: DESKTOP-7GC9VVB.global 127.0.0.1
    2019-07-18T13:34:54.618+0100 [INFO ] nomad: starting scheduling worker(s): num_workers=8 schedulers="[service batch system _core]"
    2019-07-18T13:34:54.618+0100 [INFO ] nomad: adding server: server="DESKTOP-7GC9VVB.global (Addr: 127.0.0.1:4647) (DC: dc1)"
    2019-07-18T13:34:54.618+0100 [DEBUG] nomad: lost contact with Nomad quorum, falling back to Consul for server list
    2019-07-18T13:34:54.618+0100 [INFO ] client: using state directory: state_dir=C:\Users\ilya\AppData\Local\Temp\NomadClient530425495
    2019-07-18T13:34:54.619+0100 [INFO ] client: using alloc directory: alloc_dir=C:\Users\ilya\AppData\Local\Temp\NomadClient342866186
    2019-07-18T13:34:54.647+0100 [DEBUG] client.fingerprint_mgr: built-in fingerprints: fingerprinters="[arch consul cpu host memory network nomad signal storage vault env_aws env_gce]"
    2019-07-18T13:34:55.620+0100 [ERROR] nomad: error looking up Nomad servers in Consul: error="server.nomad: unable to query Consul datacenters: Get http://127.0.0.1:8500/v1/catalog/datacenters: dial tcp 127.0.0.1:8500: connectex: No connection could be made because the target machine actively refused it."
    2019-07-18T13:34:55.649+0100 [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul period=15s
    2019-07-18T13:34:55.922+0100 [WARN ] nomad: raft: Heartbeat timeout from "" reached, starting election
    2019-07-18T13:34:55.922+0100 [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state in term 2
    2019-07-18T13:34:55.922+0100 [DEBUG] nomad: raft: Votes needed: 1
    2019-07-18T13:34:55.922+0100 [DEBUG] nomad: raft: Vote granted from 127.0.0.1:4647 in term 2. Tally: 1
    2019-07-18T13:34:55.922+0100 [INFO ] nomad: raft: Election won. Tally: 1
    2019-07-18T13:34:55.922+0100 [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    2019-07-18T13:34:55.922+0100 [INFO ] nomad: cluster leadership acquired
    2019-07-18T13:34:56.713+0100 [DEBUG] client.fingerprint_mgr.cpu: detected cpu frequency: MHz=2112
    2019-07-18T13:34:56.713+0100 [DEBUG] client.fingerprint_mgr.cpu: detected core count: cores=8
    2019-07-18T13:34:57.936+0100 [WARN ] client.fingerprint_mgr.network: unable to parse LinkSpeed value: value="526.5 Mbps"
    2019-07-18T13:34:57.936+0100 [DEBUG] client.fingerprint_mgr.network: link speed could not be detected and no speed specified by user, falling back to default speed: mbits=1000
    2019-07-18T13:34:57.944+0100 [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=WiFi IP=192.168.11.121
    2019-07-18T13:34:57.945+0100 [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s
    2019-07-18T13:34:57.945+0100 [DEBUG] client.fingerprint_mgr.env_aws: error querying AWS Metadata URL, skipping
    2019-07-18T13:34:57.945+0100 [DEBUG] client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get http://169.254.169.254/computeMetadata/v1/instance/machine-type: dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network."
    2019-07-18T13:34:57.945+0100 [DEBUG] client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping
    2019-07-18T13:34:57.945+0100 [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs="[arch cpu host network nomad signal storage]"
    2019-07-18T13:34:57.945+0100 [INFO ] client.plugin: starting plugin manager: plugin-type=driver
    2019-07-18T13:34:57.945+0100 [INFO ] client.plugin: starting plugin manager: plugin-type=device
    2019-07-18T13:34:57.945+0100 [DEBUG] client.device_mgr: exiting since there are no device plugins
    2019-07-18T13:34:57.945+0100 [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver
    2019-07-18T13:34:57.945+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=raw_exec health=healthy description=Healthy
    2019-07-18T13:34:57.945+0100 [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device
    2019-07-18T13:34:57.945+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=exec health=undetected description="exec driver unsupported on client OS"
    2019-07-18T13:34:57.945+0100 [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=device
    2019-07-18T13:34:57.945+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=mock_driver health=healthy description=Healthy
    2019-07-18T13:34:57.946+0100 [DEBUG] client.server_mgr: new server list: new_servers=127.0.0.1:4647,127.0.0.1:4647 old_servers=
    2019-07-18T13:34:57.952+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=docker health=healthy description=Healthy
    2019-07-18T13:34:57.956+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=qemu health=undetected description=
    2019-07-18T13:34:57.957+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=java health=undetected description=
    2019-07-18T13:34:57.957+0100 [DEBUG] client.driver_mgr: detected drivers: drivers="map[healthy:[raw_exec mock_driver docker] undetected:[exec qemu java]]"
    2019-07-18T13:34:57.957+0100 [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=driver
    2019-07-18T13:34:57.957+0100 [INFO ] client: started client: node_id=82c35105-5615-e3d4-415d-e96113d90f85
    2019-07-18T13:34:57.957+0100 [DEBUG] client: updated allocations: index=1 total=0 pulled=0 filtered=0
    2019-07-18T13:34:57.957+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=0
    2019-07-18T13:34:57.957+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0
    2019-07-18T13:34:57.958+0100 [INFO ] client: node registration complete
    2019-07-18T13:34:57.960+0100 [DEBUG] client: state updated: node_status=ready
    2019-07-18T13:34:57.960+0100 [DEBUG] client.server_mgr: new server list: new_servers=127.0.0.1:4647 old_servers=127.0.0.1:4647,127.0.0.1:4647
    2019-07-18T13:34:58.958+0100 [DEBUG] client: state changed, updating node and re-registering
    2019-07-18T13:34:58.961+0100 [INFO ] client: node registration complete
    2019-07-18T13:37:17.933+0100 [DEBUG] worker: dequeued evaluation: eval_id=61213792-2572-c747-a28a-7575e839e1ff
    2019-07-18T13:37:17.933+0100 [DEBUG] http: request complete: method=PUT path=/v1/jobs?region=global duration=999.8µs
    2019-07-18T13:37:17.934+0100 [DEBUG] worker.batch_sched: reconciled current state with desired state: eval_id=61213792-2572-c747-a28a-7575e839e1ff job_id=job49 namespace=default results="Total changes: (place 1) (destructive 0) (inplace 0) (stop 0)
Desired Changes for "epictaskgroup1": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)"
    2019-07-18T13:37:17.935+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/61213792-2572-c747-a28a-7575e839e1ff?region=global duration=0s
    2019-07-18T13:37:17.935+0100 [DEBUG] worker: created evaluation: eval="<Eval "f596d378-7e58-b74d-8052-49224a276de5" JobID: "job49" Namespace: "default">"
    2019-07-18T13:37:17.936+0100 [DEBUG] worker.batch_sched: failed to place all allocations, blocked eval created: eval_id=61213792-2572-c747-a28a-7575e839e1ff job_id=job49 namespace=default blocked_eval_id=f596d378-7e58-b74d-8052-49224a276de5
    2019-07-18T13:37:17.936+0100 [DEBUG] worker.batch_sched: setting eval status: eval_id=61213792-2572-c747-a28a-7575e839e1ff job_id=job49 namespace=default status=complete
    2019-07-18T13:37:17.937+0100 [DEBUG] worker: updated evaluation: eval="<Eval "61213792-2572-c747-a28a-7575e839e1ff" JobID: "job49" Namespace: "default">"
    2019-07-18T13:37:17.938+0100 [DEBUG] worker: ack evaluation: eval_id=61213792-2572-c747-a28a-7575e839e1ff
    2019-07-18T13:37:17.939+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/61213792-2572-c747-a28a-7575e839e1ff/allocations?region=global duration=2.0001ms
    2019-07-18T13:37:18.942+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/61213792-2572-c747-a28a-7575e839e1ff?region=global duration=981.1µs
    2019-07-18T13:37:18.948+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/61213792-2572-c747-a28a-7575e839e1ff/allocations?region=global duration=0s
    2019-07-18T13:37:48.058+0100 [DEBUG] http: request complete: method=GET path=/v1/jobs?prefix=job49 duration=0s
    2019-07-18T13:37:48.059+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49 duration=0s
    2019-07-18T13:37:48.065+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/allocations?all=false duration=0s
    2019-07-18T13:37:48.066+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/evaluations duration=0s
    2019-07-18T13:37:48.067+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/deployment duration=0s
    2019-07-18T13:37:48.069+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/summary duration=0s
    2019-07-18T13:38:14.330+0100 [DEBUG] http: request complete: method=GET path=/v1/jobs?prefix=job49 duration=0s
    2019-07-18T13:38:14.331+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49 duration=0s
    2019-07-18T13:38:14.340+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/allocations?all=false duration=0s
    2019-07-18T13:38:14.341+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/evaluations duration=0s
    2019-07-18T13:38:14.349+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/deployment duration=0s
    2019-07-18T13:38:14.357+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/summary duration=0s
    2019-07-18T13:38:21.311+0100 [DEBUG] worker: dequeued evaluation: eval_id=af1b39cd-42b6-1ed8-3a44-8758e9a79ddf
    2019-07-18T13:38:21.311+0100 [DEBUG] http: request complete: method=PUT path=/v1/jobs?region=global duration=0s
    2019-07-18T13:38:21.312+0100 [DEBUG] worker.batch_sched: reconciled current state with desired state: eval_id=af1b39cd-42b6-1ed8-3a44-8758e9a79ddf job_id=job49 namespace=default results="Total changes: (place 1) (destructive 0) (inplace 0) (stop 0)
Desired Changes for "epictaskgroup1": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)"
    2019-07-18T13:38:21.313+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/af1b39cd-42b6-1ed8-3a44-8758e9a79ddf?region=global duration=0s
    2019-07-18T13:38:21.314+0100 [DEBUG] worker: submitted plan for evaluation: eval_id=af1b39cd-42b6-1ed8-3a44-8758e9a79ddf
    2019-07-18T13:38:21.315+0100 [DEBUG] worker.batch_sched: setting eval status: eval_id=af1b39cd-42b6-1ed8-3a44-8758e9a79ddf job_id=job49 namespace=default status=complete
    2019-07-18T13:38:21.315+0100 [DEBUG] client: updated allocations: index=18 total=1 pulled=1 filtered=0
    2019-07-18T13:38:21.316+0100 [DEBUG] worker: updated evaluation: eval="<Eval "af1b39cd-42b6-1ed8-3a44-8758e9a79ddf" JobID: "job49" Namespace: "default">"
    2019-07-18T13:38:21.316+0100 [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=0
    2019-07-18T13:38:21.317+0100 [DEBUG] worker: ack evaluation: eval_id=af1b39cd-42b6-1ed8-3a44-8758e9a79ddf
    2019-07-18T13:38:21.317+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/af1b39cd-42b6-1ed8-3a44-8758e9a79ddf/allocations?region=global duration=997.5µs
    2019-07-18T13:38:21.317+0100 [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
    2019-07-18T13:38:21.319+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask path=C:\nomad-latest.exe args="[C:\nomad-latest.exe logmon]"
    2019-07-18T13:38:21.332+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask path=C:\nomad-latest.exe pid=2528
    2019-07-18T13:38:21.332+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask path=C:\nomad-latest.exe
    2019-07-18T13:38:21.407+0100 [DEBUG] client: updated allocations: index=20 total=1 pulled=0 filtered=1
    2019-07-18T13:38:21.408+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-18T13:38:21.408+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-18T13:38:21.647+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask version=2
    2019-07-18T13:38:21.647+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: plugin address: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask @module=logmon address=127.0.0.1:10000 network=tcp timestamp=2019-07-18T13:38:21.647+0100
    2019-07-18T13:38:21.649+0100 [INFO ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: opening fifo: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask @module=logmon path=//./pipe/sleepTask-4f1fcd22.stdout timestamp=2019-07-18T13:38:21.649+0100
    2019-07-18T13:38:21.650+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.artifacts: downloading artifact: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask artifact=https://dev.iqoqo.co/api/v1/files/5d1b2d67169a29000d7af05d/simple.py
    2019-07-18T13:38:21.650+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: failed to open fifo: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask @module=logmon error="failed to open fifo listener: open //./pipe/sleepTask-4f1fcd22.stdout: The system cannot find the file specified." timestamp=2019-07-18T13:38:21.650+0100
    2019-07-18T13:38:21.653+0100 [INFO ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: opening fifo: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask path=//./pipe/sleepTask-4f1fcd22.stderr @module=logmon timestamp=2019-07-18T13:38:21.650+0100
    2019-07-18T13:38:21.654+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: failed to open fifo: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask error="failed to open fifo listener: open //./pipe/sleepTask-4f1fcd22.stderr: The system cannot find the file specified." @module=logmon timestamp=2019-07-18T13:38:21.650+0100
    2019-07-18T13:38:21.809+0100 [DEBUG] client: updated allocations: index=21 total=1 pulled=0 filtered=1
    2019-07-18T13:38:21.810+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-18T13:38:21.812+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-18T13:38:22.329+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/af1b39cd-42b6-1ed8-3a44-8758e9a79ddf?region=global duration=1.0185ms
    2019-07-18T13:38:22.331+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/af1b39cd-42b6-1ed8-3a44-8758e9a79ddf/allocations?region=global duration=0s
    2019-07-18T13:38:23.952+0100 [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=python:3-windowsservercore image_id=sha256:22a423a5db36a909fa52de1e21581765ca7de8687036d412c438904a19d762d9 references=1
    2019-07-18T13:38:23.952+0100 [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=sleepTask memory=314572800 cpu_shares=100 cpu_quota=0 cpu_period=0
    2019-07-18T13:38:23.953+0100 [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=sleepTask binds="[]string{"C:\\Users\\ilya\\AppData\\Local\\Temp\\NomadClient342866186\\daeddbc0-b5fc-badf-3534-4f8a393f63d6\\alloc:c:\\alloc", "C:\\Users\\ilya\\AppData\\Local\\Temp\\NomadClient342866186\\daeddbc0-b5fc-badf-3534-4f8a393f63d6\\sleepTask\\local:c:\\local", "C:\\Users\\ilya\\AppData\\Local\\Temp\\NomadClient342866186\\daeddbc0-b5fc-badf-3534-4f8a393f63d6\\sleepTask\\secrets:c:\\secrets"}"
    2019-07-18T13:38:23.955+0100 [DEBUG] client.driver_mgr.docker: networking mode not specified; using default: driver=docker task_name=sleepTask network_mode=nat
    2019-07-18T13:38:23.956+0100 [DEBUG] client.driver_mgr.docker: no network interfaces are available: driver=docker task_name=sleepTask
    2019-07-18T13:38:23.957+0100 [DEBUG] client.driver_mgr.docker: setting container startup command: driver=docker task_name=sleepTask command="python simple.py"
    2019-07-18T13:38:23.958+0100 [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=sleepTask container_name=sleepTask-daeddbc0-b5fc-badf-3534-4f8a393f63d6
    2019-07-18T13:38:23.985+0100 [INFO ] client.driver_mgr.docker: created container: driver=docker container_id=a75ebed822e8cc8818704af81402cdbde10fa1101a5415968a10d83c917a4750
    2019-07-18T13:38:30.896+0100 [INFO ] client.driver_mgr.docker: started container: driver=docker container_id=a75ebed822e8cc8818704af81402cdbde10fa1101a5415968a10d83c917a4750
    2019-07-18T13:38:30.897+0100 [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=C:\nomad-latest.exe args="[C:\nomad-latest.exe docker_logger]"
    2019-07-18T13:38:30.900+0100 [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=C:\nomad-latest.exe pid=2976
    2019-07-18T13:38:30.901+0100 [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=C:\nomad-latest.exe
    2019-07-18T13:38:31.237+0100 [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
    2019-07-18T13:38:31.237+0100 [DEBUG] client.driver_mgr.docker.docker_logger.nomad-latest.exe: plugin address: driver=docker @module=docker_logger address=127.0.0.1:10001 network=tcp timestamp=2019-07-18T13:38:31.237+0100
    2019-07-18T13:38:31.239+0100 [DEBUG] client.driver_mgr.docker.docker_logger.nomad-latest.exe: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2019-07-18T13:38:31.239+0100
    2019-07-18T13:38:31.239+0100 [ERROR] client.driver_mgr.docker.docker_logger.nomad-latest.exe: log streaming ended with terminal error: driver=docker @module=docker_logger error="open //./pipe/sleepTask-4f1fcd22.stdout: The system cannot find the file specified." timestamp=2019-07-18T13:38:31.239+0100
    2019-07-18T13:38:31.417+0100 [DEBUG] client: updated allocations: index=22 total=1 pulled=0 filtered=1
    2019-07-18T13:38:31.418+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-18T13:38:31.418+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-18T13:38:32.247+0100 [ERROR] client.alloc_runner.task_runner.task_hook.stats_hook: failed to start stats collection for task with unrecoverable error: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask error="container stopped"
    2019-07-18T13:38:32.258+0100 [DEBUG] client.driver_mgr.docker.docker_logger: plugin process exited: driver=docker path=C:\nomad-latest.exe pid=2976
    2019-07-18T13:38:32.258+0100 [DEBUG] client.driver_mgr.docker.docker_logger: plugin exited: driver=docker
    2019-07-18T13:38:32.261+0100 [DEBUG] client.driver_mgr.docker: image id reference count decremented: driver=docker image_id=sha256:22a423a5db36a909fa52de1e21581765ca7de8687036d412c438904a19d762d9 references=1
    2019-07-18T13:38:32.266+0100 [INFO ] client.alloc_runner.task_runner: not restarting task: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask reason="Policy allows no restarts"
    2019-07-18T13:38:32.266+0100 [INFO ] client.gc: marking allocation for GC: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6
    2019-07-18T13:38:32.410+0100 [DEBUG] client: updated allocations: index=23 total=1 pulled=0 filtered=1
    2019-07-18T13:38:32.411+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-18T13:38:32.412+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-18T13:38:34.268+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: timed out waiting for read-side of process output pipe to close: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask @module=logmon timestamp=2019-07-18T13:38:34.268+0100
    2019-07-18T13:38:34.275+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: timed out waiting for read-side of process output pipe to close: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask @module=logmon timestamp=2019-07-18T13:38:34.268+0100
    2019-07-18T13:38:34.283+0100 [ERROR] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: reading plugin stderr: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask error="read |0: file already closed"
    2019-07-18T13:38:34.283+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask path=C:\nomad-latest.exe pid=2528
    2019-07-18T13:38:34.285+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask
    2019-07-18T13:38:34.286+0100 [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=daeddbc0-b5fc-badf-3534-4f8a393f63d6 task=sleepTask
    2019-07-18T13:38:34.423+0100 [DEBUG] client: updated allocations: index=24 total=1 pulled=0 filtered=1
    2019-07-18T13:38:34.424+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-18T13:38:34.426+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-18T13:38:44.681+0100 [DEBUG] http: request complete: method=GET path=/v1/jobs?prefix=job49 duration=998.5µs
    2019-07-18T13:38:44.682+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49 duration=0s
    2019-07-18T13:38:44.691+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/allocations?all=false duration=986.3µs
    2019-07-18T13:38:44.693+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/evaluations duration=1.0041ms
    2019-07-18T13:38:44.709+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/deployment duration=0s
    2019-07-18T13:38:44.720+0100 [DEBUG] http: request complete: method=GET path=/v1/job/job49/summary duration=1.041ms
    2019-07-18T13:39:55.924+0100 [DEBUG] worker: dequeued evaluation: eval_id=e7106972-4a17-5b22-ae1f-a1c95469fd04
    2019-07-18T13:39:55.924+0100 [DEBUG] worker: dequeued evaluation: eval_id=0ebf5cce-97b1-54ed-f4b7-aa995ad517e8
    2019-07-18T13:39:55.927+0100 [DEBUG] core.sched: deployment GC scanning before cutoff index: index=0 deployment_gc_threshold=1h0m0s
    2019-07-18T13:39:55.925+0100 [DEBUG] core.sched: eval GC scanning before cutoff index: index=0 eval_gc_threshold=1h0m0s
    2019-07-18T13:39:55.929+0100 [DEBUG] worker: ack evaluation: eval_id=0ebf5cce-97b1-54ed-f4b7-aa995ad517e8
    2019-07-18T13:39:55.931+0100 [DEBUG] worker: ack evaluation: eval_id=e7106972-4a17-5b22-ae1f-a1c95469fd04
    2019-07-18T13:39:55.940+0100 [DEBUG] worker: dequeued evaluation: eval_id=4e2f515c-c4cb-99aa-53d4-5bfa70a29f0d
    2019-07-18T13:39:55.942+0100 [DEBUG] worker: dequeued evaluation: eval_id=7857b557-cdbe-e881-5f88-62e3fb9db529
    2019-07-18T13:39:55.949+0100 [DEBUG] core.sched: node GC scanning before cutoff index: index=0 node_gc_threshold=24h0m0s
    2019-07-18T13:39:55.958+0100 [DEBUG] core.sched: job GC scanning before cutoff index: index=0 job_gc_threshold=4h0m0s
    2019-07-18T13:39:55.960+0100 [DEBUG] worker: ack evaluation: eval_id=4e2f515c-c4cb-99aa-53d4-5bfa70a29f0d
    2019-07-18T13:39:55.970+0100 [DEBUG] worker: ack evaluation: eval_id=7857b557-cdbe-e881-5f88-62e3fb9db529
    2019-07-18T13:40:34.468+0100 [DEBUG] http: request complete: method=GET path=/v1/allocations?prefix=daeddbc0 duration=0s
    2019-07-18T13:40:34.470+0100 [DEBUG] http: request complete: method=GET path=/v1/allocation/daeddbc0-b5fc-badf-3534-4f8a393f63d6 duration=999.6µs
    2019-07-18T13:40:34.472+0100 [DEBUG] http: request complete: method=GET path=/v1/node/82c35105-5615-e3d4-415d-e96113d90f85 duration=998.1µs
    2019-07-18T13:40:34.474+0100 [DEBUG] http: request complete: method=GET path=/v1/client/fs/logs/daeddbc0-b5fc-badf-3534-4f8a393f63d6?follow=false&offset=0&origin=start&region=global&task=sleepTask&type=stdout duration=1.0013ms
    2019-07-18T13:43:35.228+0100 [DEBUG] client: updated allocations: index=24 total=1 pulled=0 filtered=1
    2019-07-18T13:43:35.228+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-18T13:43:35.228+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-18T13:44:55.931+0100 [DEBUG] worker: dequeued evaluation: eval_id=054f67e4-637c-4df2-d987-1ff79211b392
    2019-07-18T13:44:55.931+0100 [DEBUG] worker: dequeued evaluation: eval_id=504c9d8a-feed-a6c9-c6ac-8e8563754dbb
    2019-07-18T13:44:55.932+0100 [DEBUG] core.sched: node GC scanning before cutoff index: index=0 node_gc_threshold=24h0m0s
    2019-07-18T13:44:55.931+0100 [DEBUG] core.sched: deployment GC scanning before cutoff index: index=0 deployment_gc_threshold=1h0m0s
    2019-07-18T13:44:55.933+0100 [DEBUG] worker: ack evaluation: eval_id=054f67e4-637c-4df2-d987-1ff79211b392
    2019-07-18T13:44:55.932+0100 [DEBUG] worker: ack evaluation: eval_id=504c9d8a-feed-a6c9-c6ac-8e8563754dbb
    2019-07-18T13:44:55.934+0100 [DEBUG] worker: dequeued evaluation: eval_id=dc198882-0588-cb7d-97c0-efb0a70e2bd6
    2019-07-18T13:44:55.935+0100 [DEBUG] worker: dequeued evaluation: eval_id=f93fb0a1-ec23-2322-c2a2-ed487c635f7f
    2019-07-18T13:44:55.935+0100 [DEBUG] core.sched: eval GC scanning before cutoff index: index=0 eval_gc_threshold=1h0m0s
    2019-07-18T13:44:55.936+0100 [DEBUG] core.sched: job GC scanning before cutoff index: index=0 job_gc_threshold=4h0m0s
    2019-07-18T13:44:55.937+0100 [DEBUG] worker: ack evaluation: eval_id=dc198882-0588-cb7d-97c0-efb0a70e2bd6
    2019-07-18T13:44:55.938+0100 [DEBUG] worker: ack evaluation: eval_id=f93fb0a1-ec23-2322-c2a2-ed487c635f7f
    2019-07-18T13:48:37.116+0100 [DEBUG] client: updated allocations: index=24 total=1 pulled=0 filtered=1
    2019-07-18T13:48:37.117+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-18T13:48:37.119+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-18T13:49:55.938+0100 [DEBUG] worker: dequeued evaluation: eval_id=62a2723b-1b10-f80d-4680-b4ac6f721656
    2019-07-18T13:49:55.938+0100 [DEBUG] worker: dequeued evaluation: eval_id=c5235490-0e0b-ddc2-85c8-248fc7810369
    2019-07-18T13:49:55.939+0100 [DEBUG] core.sched: eval GC scanning before cutoff index: index=0 eval_gc_threshold=1h0m0s
    2019-07-18T13:49:55.941+0100 [DEBUG] core.sched: job GC scanning before cutoff index: index=0 job_gc_threshold=4h0m0s
    2019-07-18T13:49:55.944+0100 [DEBUG] worker: ack evaluation: eval_id=c5235490-0e0b-ddc2-85c8-248fc7810369
    2019-07-18T13:49:55.942+0100 [DEBUG] worker: ack evaluation: eval_id=62a2723b-1b10-f80d-4680-b4ac6f721656
    2019-07-18T13:49:55.947+0100 [DEBUG] worker: dequeued evaluation: eval_id=51bb9a5a-5b17-9700-d572-5cd452b175b1
    2019-07-18T13:49:55.960+0100 [DEBUG] core.sched: deployment GC scanning before cutoff index: index=0 deployment_gc_threshold=1h0m0s
    2019-07-18T13:49:55.952+0100 [DEBUG] worker: dequeued evaluation: eval_id=c06fcb0b-f261-a05d-580c-8bb194804945
    2019-07-18T13:49:55.965+0100 [DEBUG] worker: ack evaluation: eval_id=51bb9a5a-5b17-9700-d572-5cd452b175b1
    2019-07-18T13:49:55.967+0100 [DEBUG] core.sched: node GC scanning before cutoff index: index=0 node_gc_threshold=24h0m0s
    2019-07-18T13:49:55.980+0100 [DEBUG] worker: ack evaluation: eval_id=c06fcb0b-f261-a05d-580c-8bb194804945
    2019-07-18T13:53:22.633+0100 [DEBUG] http: request complete: method=GET path=/v1/allocations?prefix=daeddbc0 duration=0s
    2019-07-18T13:53:22.634+0100 [DEBUG] http: request complete: method=GET path=/v1/allocation/daeddbc0-b5fc-badf-3534-4f8a393f63d6 duration=0s
    2019-07-18T13:53:22.636+0100 [DEBUG] http: request complete: method=GET path=/v1/node/82c35105-5615-e3d4-415d-e96113d90f85 duration=0s
    2019-07-18T13:53:22.638+0100 [DEBUG] http: request complete: method=GET path=/v1/client/fs/logs/daeddbc0-b5fc-badf-3534-4f8a393f63d6?follow=false&offset=0&origin=start&region=global&task=sleepTask&type=stdout duration=0s
    2019-07-18T13:53:54.377+0100 [DEBUG] client: updated allocations: index=24 total=1 pulled=0 filtered=1
    2019-07-18T13:53:55.352+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-18T13:53:55.354+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-18T13:54:55.923+0100 [DEBUG] worker: dequeued evaluation: eval_id=663763ba-ec13-9675-19e9-114d44821463
    2019-07-18T13:54:55.923+0100 [DEBUG] worker: dequeued evaluation: eval_id=8fc4623c-a1de-a4c7-25d0-4c3181e0324a
    2019-07-18T13:54:55.933+0100 [DEBUG] core.sched: eval GC scanning before cutoff index: index=0 eval_gc_threshold=1h0m0s
    2019-07-18T13:54:55.932+0100 [DEBUG] core.sched: job GC scanning before cutoff index: index=0 job_gc_threshold=4h0m0s
    2019-07-18T13:54:55.934+0100 [DEBUG] worker: ack evaluation: eval_id=663763ba-ec13-9675-19e9-114d44821463
    2019-07-18T13:54:55.934+0100 [DEBUG] worker: ack evaluation: eval_id=8fc4623c-a1de-a4c7-25d0-4c3181e0324a
    2019-07-18T13:54:55.935+0100 [DEBUG] worker: dequeued evaluation: eval_id=8e882f00-d3b2-0ba5-b21c-4d5c458f7550
    2019-07-18T13:54:55.936+0100 [DEBUG] worker: dequeued evaluation: eval_id=7f725fa8-de7c-4cfe-7970-c1b930e9a5c7
    2019-07-18T13:54:55.937+0100 [DEBUG] core.sched: deployment GC scanning before cutoff index: index=0 deployment_gc_threshold=1h0m0s
    2019-07-18T13:54:55.936+0100 [DEBUG] core.sched: node GC scanning before cutoff index: index=0 node_gc_threshold=24h0m0s
    2019-07-18T13:54:55.938+0100 [DEBUG] worker: ack evaluation: eval_id=7f725fa8-de7c-4cfe-7970-c1b930e9a5c7
    2019-07-18T13:54:55.938+0100 [DEBUG] worker: ack evaluation: eval_id=8e882f00-d3b2-0ba5-b21c-4d5c458f7550
==> Caught signal: interrupt
    2019-07-18T13:55:33.538+0100 [DEBUG] http: shutting down http server
    2019-07-18T13:55:33.541+0100 [INFO ] agent: requesting shutdown
    2019-07-18T13:55:33.543+0100 [INFO ] client: shutting down
    2019-07-18T13:55:33.551+0100 [INFO ] client.plugin: shutting down plugin manager: plugin-type=device
    2019-07-18T13:55:33.551+0100 [INFO ] client.plugin: plugin manager finished: plugin-type=device
    2019-07-18T13:55:33.557+0100 [INFO ] client.plugin: shutting down plugin manager: plugin-type=driver
    2019-07-18T13:55:33.558+0100 [INFO ] client.plugin: plugin manager finished: plugin-type=driver
    2019-07-18T13:55:33.559+0100 [DEBUG] client.server_mgr: shutting down
    2019-07-18T13:55:33.559+0100 [INFO ] nomad: shutting down server
    2019-07-18T13:55:33.561+0100 [WARN ] nomad: serf: Shutdown without a Leave
    2019-07-18T13:55:33.568+0100 [INFO ] agent: shutdown complete
ilyaiqoqo changed the title from "windows+docker logs are empty" to "windows docker logs are empty" on Jul 18, 2019
@ilyaiqoqo
Contributor Author

Commit efda81c introduced this bug as part of PR #5864.

@preetapan
Contributor

Thanks for the report @ilyaiqoqo, we'll investigate and post an update.

@endocrimes
Contributor

endocrimes commented Jul 19, 2019

hey @ilyaiqoqo,

I've just tried to reproduce this on current master on Windows Server 2019 with the same jobspec, and received the expected logs:

[screenshot: expected alloc logs output]

I got the same result running Nomad both as administrator and as a regular user. (The logs CLI is running inside a WSL shell for convenience, but the agent is running in a regular Windows PowerShell.)

Based on the behavior you're seeing though, it looks like we're failing to stat the path we want to use for logs (I would guess for permissions-related reasons?) and then failing to handle that error correctly. I've just created the dani/logmon-fixes-windows branch, which adds some logging for this case; if you could try to reproduce the issue on that branch, it should give me some pointers as to where to go from here.

Thanks!
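
For context, here is a minimal sketch of the create-vs-open decision a log collector makes for a Windows named-pipe FIFO. It is illustrative only, built on github.com/Microsoft/go-winio rather than Nomad's actual fifo package, and it shows where a spurious stat error derails log streaming:

package main

import (
	"fmt"
	"io"
	"os"

	winio "github.com/Microsoft/go-winio"
)

// openLogFifo decides whether to create a new named pipe or attach to an
// existing one. The flow is a sketch of the pattern, not Nomad's API.
func openLogFifo(path string) (io.Closer, error) {
	if _, err := os.Stat(path); err != nil {
		if os.IsNotExist(err) {
			// Pipe doesn't exist yet: create the listener side.
			return winio.ListenPipe(path, nil)
		}
		// If Stat fails with any other error (as the dani/logmon-fixes-windows
		// output below shows: "FindFirstFile ...: There are no more files."),
		// the FIFO is neither created nor opened and log streaming never starts.
		return nil, fmt.Errorf("failed to stat fifo: %v", err)
	}
	// Pipe already exists: dial the client side.
	return winio.DialPipe(path, nil)
}

func main() {
	c, err := openLogFifo(`\\.\pipe\sleepTask-example.stdout`)
	if err != nil {
		fmt.Println(err)
		return
	}
	c.Close()
}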

@ilyaiqoqo
Copy link
Contributor Author

Using dani/logmon-fixes-windows, the new warning shows:

2019-07-21T08:24:53.153+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: Failed to stat FIFO with unexpected error: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask error="FindFirstFile //./pipe/sleepTask-444cb706.stdout: There are no more files." @module=logmon timestamp=2019-07-21T08:24:53.152+0100

==> No configuration files loaded
==> Starting Nomad agent...
==> Nomad agent configuration:

       Advertise Addrs: HTTP: 127.0.0.1:4646; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
            Bind Addrs: HTTP: 127.0.0.1:4646; RPC: 127.0.0.1:4647; Serf: 127.0.0.1:4648
                Client: true
             Log Level: DEBUG
                Region: global (DC: dc1)
                Server: true
               Version: 0.9.4-dev

==> Nomad agent started! Log data will stream in below:

    2019-07-21T08:24:41.318+0100 [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=
    2019-07-21T08:24:41.318+0100 [DEBUG] agent.plugin_loader.docker: using client connection initialized from environment: plugin_dir=
    2019-07-21T08:24:41.319+0100 [INFO ] agent: detected plugin: name=docker type=driver plugin_version=0.1.0
    2019-07-21T08:24:41.319+0100 [INFO ] agent: detected plugin: name=mock_driver type=driver plugin_version=0.1.0
    2019-07-21T08:24:41.319+0100 [INFO ] agent: detected plugin: name=raw_exec type=driver plugin_version=0.1.0
    2019-07-21T08:24:41.319+0100 [INFO ] agent: detected plugin: name=exec type=driver plugin_version=0.1.0
    2019-07-21T08:24:41.319+0100 [INFO ] agent: detected plugin: name=qemu type=driver plugin_version=0.1.0
    2019-07-21T08:24:41.319+0100 [INFO ] agent: detected plugin: name=java type=driver plugin_version=0.1.0
    2019-07-21T08:24:41.320+0100 [INFO ] nomad: raft: Initial configuration (index=1): [{Suffrage:Voter ID:127.0.0.1:4647 Address:127.0.0.1:4647}]
    2019-07-21T08:24:41.320+0100 [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Follower] entering Follower state (Leader: "")
    2019-07-21T08:24:41.322+0100 [INFO ] nomad: serf: EventMemberJoin: DESKTOP-7GC9VVB.global 127.0.0.1
    2019-07-21T08:24:41.322+0100 [INFO ] nomad: starting scheduling worker(s): num_workers=8 schedulers="[service batch system _core]"
    2019-07-21T08:24:41.322+0100 [INFO ] nomad: adding server: server="DESKTOP-7GC9VVB.global (Addr: 127.0.0.1:4647) (DC: dc1)"
    2019-07-21T08:24:41.322+0100 [DEBUG] nomad: lost contact with Nomad quorum, falling back to Consul for server list
    2019-07-21T08:24:41.322+0100 [INFO ] client: using state directory: state_dir=C:\Users\ilya\AppData\Local\Temp\NomadClient629908327
    2019-07-21T08:24:41.323+0100 [INFO ] client: using alloc directory: alloc_dir=C:\Users\ilya\AppData\Local\Temp\NomadClient539703706
    2019-07-21T08:24:41.385+0100 [DEBUG] client.fingerprint_mgr: built-in fingerprints: fingerprinters="[arch consul cpu host memory network nomad signal storage vault env_aws env_gce]"
    2019-07-21T08:24:42.324+0100 [ERROR] nomad: error looking up Nomad servers in Consul: error="server.nomad: unable to query Consul datacenters: Get http://127.0.0.1:8500/v1/catalog/datacenters: dial tcp 127.0.0.1:8500: connectex: No connection could be made because the target machine actively refused it."
    2019-07-21T08:24:42.397+0100 [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=consul period=15s
    2019-07-21T08:24:43.121+0100 [WARN ] nomad: raft: Heartbeat timeout from "" reached, starting election
    2019-07-21T08:24:43.121+0100 [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Candidate] entering Candidate state in term 2
    2019-07-21T08:24:43.121+0100 [DEBUG] nomad: raft: Votes needed: 1
    2019-07-21T08:24:43.121+0100 [DEBUG] nomad: raft: Vote granted from 127.0.0.1:4647 in term 2. Tally: 1
    2019-07-21T08:24:43.121+0100 [INFO ] nomad: raft: Election won. Tally: 1
    2019-07-21T08:24:43.121+0100 [INFO ] nomad: raft: Node at 127.0.0.1:4647 [Leader] entering Leader state
    2019-07-21T08:24:43.122+0100 [INFO ] nomad: cluster leadership acquired
    2019-07-21T08:24:43.618+0100 [DEBUG] client.fingerprint_mgr.cpu: detected cpu frequency: MHz=2112
    2019-07-21T08:24:43.618+0100 [DEBUG] client.fingerprint_mgr.cpu: detected core count: cores=8
    2019-07-21T08:24:45.220+0100 [DEBUG] client.fingerprint_mgr.network: link speed detected: interface=WiFi mbits=780
    2019-07-21T08:24:45.229+0100 [DEBUG] client.fingerprint_mgr.network: detected interface IP: interface=WiFi IP=192.168.11.121
    2019-07-21T08:24:45.229+0100 [DEBUG] client.fingerprint_mgr: fingerprinting periodically: fingerprinter=vault period=15s
    2019-07-21T08:24:45.229+0100 [DEBUG] client.fingerprint_mgr.env_aws: error querying AWS Metadata URL, skipping
    2019-07-21T08:24:45.229+0100 [DEBUG] client.fingerprint_mgr.env_gce: could not read value for attribute: attribute=machine-type error="Get http://169.254.169.254/computeMetadata/v1/instance/machine-type: dial tcp 169.254.169.254:80: connectex: A socket operation was attempted to an unreachable network."
    2019-07-21T08:24:45.229+0100 [DEBUG] client.fingerprint_mgr.env_gce: error querying GCE Metadata URL, skipping
    2019-07-21T08:24:45.229+0100 [DEBUG] client.fingerprint_mgr: detected fingerprints: node_attrs="[arch cpu host network nomad signal storage]"
    2019-07-21T08:24:45.229+0100 [INFO ] client.plugin: starting plugin manager: plugin-type=driver
    2019-07-21T08:24:45.229+0100 [INFO ] client.plugin: starting plugin manager: plugin-type=device
    2019-07-21T08:24:45.229+0100 [DEBUG] client.device_mgr: exiting since there are no device plugins
    2019-07-21T08:24:45.229+0100 [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=driver
    2019-07-21T08:24:45.229+0100 [DEBUG] client.plugin: waiting on plugin manager initial fingerprint: plugin-type=device
    2019-07-21T08:24:45.229+0100 [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=device
    2019-07-21T08:24:45.229+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=mock_driver health=healthy description=Healthy
    2019-07-21T08:24:45.229+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=exec health=undetected description="exec driver unsupported on client OS"
    2019-07-21T08:24:45.229+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=raw_exec health=healthy description=Healthy
    2019-07-21T08:24:45.231+0100 [DEBUG] client.server_mgr: new server list: new_servers=127.0.0.1:4647,127.0.0.1:4647 old_servers=
    2019-07-21T08:24:45.235+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=qemu health=undetected description=
    2019-07-21T08:24:45.237+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=java health=undetected description=
    2019-07-21T08:24:45.237+0100 [DEBUG] client.driver_mgr: initial driver fingerprint: driver=docker health=healthy description=Healthy
    2019-07-21T08:24:45.237+0100 [DEBUG] client.driver_mgr: detected drivers: drivers="map[healthy:[mock_driver raw_exec docker] undetected:[exec qemu java]]"
    2019-07-21T08:24:45.237+0100 [DEBUG] client.plugin: finished plugin manager initial fingerprint: plugin-type=driver
    2019-07-21T08:24:45.238+0100 [INFO ] client: started client: node_id=f77c7371-eac4-f189-c22c-1034d2ed1417
    2019-07-21T08:24:45.238+0100 [DEBUG] client: updated allocations: index=1 total=0 pulled=0 filtered=0
    2019-07-21T08:24:45.238+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=0
    2019-07-21T08:24:45.238+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=0 errors=0
    2019-07-21T08:24:45.239+0100 [INFO ] client: node registration complete
    2019-07-21T08:24:45.240+0100 [DEBUG] client: state updated: node_status=ready
    2019-07-21T08:24:45.240+0100 [DEBUG] client.server_mgr: new server list: new_servers=127.0.0.1:4647 old_servers=127.0.0.1:4647,127.0.0.1:4647
    2019-07-21T08:24:46.240+0100 [DEBUG] client: state changed, updating node and re-registering
    2019-07-21T08:24:46.240+0100 [INFO ] client: node registration complete
    2019-07-21T08:24:52.787+0100 [DEBUG] worker: dequeued evaluation: eval_id=af5839e3-fce2-4f68-7fd7-61920695c252
    2019-07-21T08:24:52.787+0100 [DEBUG] http: request complete: method=PUT path=/v1/jobs?region=global duration=1.9995ms
    2019-07-21T08:24:52.788+0100 [DEBUG] worker.batch_sched: reconciled current state with desired state: eval_id=af5839e3-fce2-4f68-7fd7-61920695c252 job_id=job49 namespace=default results="Total changes: (place 1) (destructive 0) (inplace 0) (stop 0)
Desired Changes for "epictaskgroup1": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)"
    2019-07-21T08:24:52.790+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/af5839e3-fce2-4f68-7fd7-61920695c252?region=global duration=0s
    2019-07-21T08:24:52.791+0100 [DEBUG] worker: submitted plan for evaluation: eval_id=af5839e3-fce2-4f68-7fd7-61920695c252
    2019-07-21T08:24:52.791+0100 [DEBUG] worker.batch_sched: setting eval status: eval_id=af5839e3-fce2-4f68-7fd7-61920695c252 job_id=job49 namespace=default status=complete
    2019-07-21T08:24:52.792+0100 [DEBUG] client: updated allocations: index=11 total=1 pulled=1 filtered=0
    2019-07-21T08:24:52.793+0100 [DEBUG] worker: updated evaluation: eval="<Eval "af5839e3-fce2-4f68-7fd7-61920695c252" JobID: "job49" Namespace: "default">"
    2019-07-21T08:24:52.793+0100 [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=0
    2019-07-21T08:24:52.794+0100 [DEBUG] worker: ack evaluation: eval_id=af5839e3-fce2-4f68-7fd7-61920695c252
    2019-07-21T08:24:52.795+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/af5839e3-fce2-4f68-7fd7-61920695c252/allocations?region=global duration=0s
    2019-07-21T08:24:52.813+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask path=C:\nomad-latest.exe args="[C:\nomad-latest.exe logmon]"
    2019-07-21T08:24:52.795+0100 [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
    2019-07-21T08:24:52.819+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask path=C:\nomad-latest.exe pid=9720
    2019-07-21T08:24:52.826+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask path=C:\nomad-latest.exe
    2019-07-21T08:24:52.889+0100 [DEBUG] client: updated allocations: index=13 total=1 pulled=0 filtered=1
    2019-07-21T08:24:52.890+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-21T08:24:52.890+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-21T08:24:53.141+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: plugin address: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask network=tcp @module=logmon address=127.0.0.1:10000 timestamp=2019-07-21T08:24:53.141+0100
    2019-07-21T08:24:53.141+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask version=2
    2019-07-21T08:24:53.152+0100 [INFO ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: opening fifo: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask @module=logmon path=//./pipe/sleepTask-444cb706.stdout timestamp=2019-07-21T08:24:53.152+0100
    2019-07-21T08:24:53.153+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: Failed to stat FIFO with unexpected error: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask error="FindFirstFile //./pipe/sleepTask-444cb706.stdout: There are no more files." @module=logmon timestamp=2019-07-21T08:24:53.152+0100
    2019-07-21T08:24:53.160+0100 [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=python:3-windowsservercore image_id=sha256:22a423a5db36a909fa52de1e21581765ca7de8687036d412c438904a19d762d9 references=1
    2019-07-21T08:24:53.161+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: failed to open fifo: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask error="failed to open fifo listener: open //./pipe/sleepTask-444cb706.stdout: The system cannot find the file specified." @module=logmon timestamp=2019-07-21T08:24:53.152+0100
    2019-07-21T08:24:53.170+0100 [INFO ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: opening fifo: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask @module=logmon path=//./pipe/sleepTask-444cb706.stderr timestamp=2019-07-21T08:24:53.152+0100
    2019-07-21T08:24:53.162+0100 [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=sleepTask memory=314572800 cpu_shares=100 cpu_quota=0 cpu_period=0
    2019-07-21T08:24:53.172+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: Failed to stat FIFO with unexpected error: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask @module=logmon error="FindFirstFile //./pipe/sleepTask-444cb706.stderr: There are no more files." timestamp=2019-07-21T08:24:53.152+0100
    2019-07-21T08:24:53.180+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: failed to open fifo: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask @module=logmon error="failed to open fifo listener: open //./pipe/sleepTask-444cb706.stderr: The system cannot find the file specified." timestamp=2019-07-21T08:24:53.153+0100
    2019-07-21T08:24:53.173+0100 [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=sleepTask binds="[]string{"C:\\Users\\ilya\\AppData\\Local\\Temp\\NomadClient539703706\\2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f\\alloc:c:\\alloc", "C:\\Users\\ilya\\AppData\\Local\\Temp\\NomadClient539703706\\2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f\\sleepTask\\local:c:\\local", "C:\\Users\\ilya\\AppData\\Local\\Temp\\NomadClient539703706\\2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f\\sleepTask\\secrets:c:\\secrets"}"
    2019-07-21T08:24:53.190+0100 [DEBUG] client.driver_mgr.docker: networking mode not specified; using default: driver=docker task_name=sleepTask network_mode=nat
    2019-07-21T08:24:53.190+0100 [DEBUG] client.driver_mgr.docker: no network interfaces are available: driver=docker task_name=sleepTask
    2019-07-21T08:24:53.191+0100 [DEBUG] client.driver_mgr.docker: setting container startup command: driver=docker task_name=sleepTask command="python --version"
    2019-07-21T08:24:53.191+0100 [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=sleepTask container_name=sleepTask-2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f
    2019-07-21T08:24:53.230+0100 [INFO ] client.driver_mgr.docker: created container: driver=docker container_id=bc7363de8adbd6bbb5a4290d595c3f1713a1e6588dd185212ce9d62d37b0372e
    2019-07-21T08:24:53.831+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/af5839e3-fce2-4f68-7fd7-61920695c252?region=global duration=0s
    2019-07-21T08:24:53.834+0100 [DEBUG] http: request complete: method=GET path=/v1/evaluation/af5839e3-fce2-4f68-7fd7-61920695c252/allocations?region=global duration=992.7µs
    2019-07-21T08:24:58.007+0100 [INFO ] client.driver_mgr.docker: started container: driver=docker container_id=bc7363de8adbd6bbb5a4290d595c3f1713a1e6588dd185212ce9d62d37b0372e
    2019-07-21T08:24:58.007+0100 [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=C:\nomad-latest.exe args="[C:\nomad-latest.exe docker_logger]"
    2019-07-21T08:24:58.012+0100 [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=C:\nomad-latest.exe pid=5812
    2019-07-21T08:24:58.012+0100 [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=C:\nomad-latest.exe
    2019-07-21T08:24:58.314+0100 [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
    2019-07-21T08:24:58.314+0100 [DEBUG] client.driver_mgr.docker.docker_logger.nomad-latest.exe: plugin address: driver=docker address=127.0.0.1:10001 network=tcp @module=docker_logger timestamp=2019-07-21T08:24:58.314+0100
    2019-07-21T08:24:58.315+0100 [DEBUG] client.driver_mgr.docker.docker_logger.nomad-latest.exe: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2019-07-21T08:24:58.315+0100
    2019-07-21T08:24:58.315+0100 [ERROR] client.driver_mgr.docker.docker_logger.nomad-latest.exe: log streaming ended with terminal error: driver=docker @module=docker_logger error="open //./pipe/sleepTask-444cb706.stdout: The system cannot find the file specified." timestamp=2019-07-21T08:24:58.315+0100
    2019-07-21T08:24:58.489+0100 [DEBUG] client: updated allocations: index=14 total=1 pulled=0 filtered=1
    2019-07-21T08:24:58.489+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-21T08:24:58.489+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-21T08:24:59.238+0100 [ERROR] client.alloc_runner.task_runner.task_hook.stats_hook: failed to start stats collection for task with unrecoverable error: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask error="container stopped"
    2019-07-21T08:24:59.249+0100 [DEBUG] client.driver_mgr.docker.docker_logger: plugin process exited: driver=docker path=C:\nomad-latest.exe pid=5812
    2019-07-21T08:24:59.249+0100 [DEBUG] client.driver_mgr.docker.docker_logger: plugin exited: driver=docker
    2019-07-21T08:24:59.252+0100 [DEBUG] client.driver_mgr.docker: image id reference count decremented: driver=docker image_id=sha256:22a423a5db36a909fa52de1e21581765ca7de8687036d412c438904a19d762d9 references=1
    2019-07-21T08:24:59.252+0100 [INFO ] client.alloc_runner.task_runner: not restarting task: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask reason="Policy allows no restarts"
    2019-07-21T08:24:59.252+0100 [INFO ] client.gc: marking allocation for GC: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f
    2019-07-21T08:24:59.502+0100 [DEBUG] client: updated allocations: index=15 total=1 pulled=0 filtered=1
    2019-07-21T08:24:59.504+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-21T08:24:59.504+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2019-07-21T08:25:01.259+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: timed out waiting for read-side of process output pipe to close: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask @module=logmon timestamp=2019-07-21T08:25:01.259+0100
    2019-07-21T08:25:01.269+0100 [WARN ] client.alloc_runner.task_runner.task_hook.logmon.nomad-latest.exe: timed out waiting for read-side of process output pipe to close: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask @module=logmon timestamp=2019-07-21T08:25:01.259+0100
    2019-07-21T08:25:01.275+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask path=C:\nomad-latest.exe pid=9720
    2019-07-21T08:25:01.275+0100 [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask
    2019-07-21T08:25:01.276+0100 [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=2ab4d80c-e5b8-fd1a-7a2d-9e4ed3917f4f task=sleepTask
    2019-07-21T08:25:01.500+0100 [DEBUG] client: updated allocations: index=16 total=1 pulled=0 filtered=1
    2019-07-21T08:25:01.502+0100 [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2019-07-21T08:25:01.502+0100 [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0

@endocrimes
Contributor

@ilyaiqoqo Hey! So it looks like this bug is caused by a regression in go1.11.x (related to golang/go#29119) that is fixed in go1.12. Could you rebuild with go1.12 to confirm?
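
A minimal sketch of that regression, assuming github.com/Microsoft/go-winio to create the pipe (the pipe name here is arbitrary):

package main

import (
	"fmt"
	"os"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	// Arbitrary test pipe name; Nomad's logmon pipes look similar.
	const path = `\\.\pipe\stat-regression-test`

	// Create the named pipe so the name exists in the pipe namespace.
	ln, err := winio.ListenPipe(path, nil)
	if err != nil {
		fmt.Println("listen:", err)
		os.Exit(1)
	}
	defer ln.Close()

	// Built with go1.11.x, this Stat fails with "FindFirstFile ...: There
	// are no more files."; built with go1.12, it succeeds.
	if _, err := os.Stat(path); err != nil {
		fmt.Println("stat failed (go1.11 behavior):", err)
		return
	}
	fmt.Println("stat succeeded (go1.12 behavior)")
}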

@ilyaiqoqo
Contributor Author

@endocrimes Compiling with go1.12.7 fixed the error.
Thanks :)

@endocrimes
Contributor

@ilyaiqoqo Thanks! I just opened #5990 as a workaround for the actual 0.9.4 release, since we can't upgrade minor Go versions in a patch release. Thanks for helping to confirm the underlying issue.

Have a great day 👋
