Jun 30, 2021
    • refine pow module and its test (#5319) · da69f6b7
      Peihong Liu authored
      
      * refine pow module and its test
      
      * simplify test & add scalar-pow backward test
      
      * refine pow module doc
      
      Co-authored-by: Yao Chi <later@usopp.net>
      Co-authored-by: oneflow-ci-bot <69100618+oneflow-ci-bot@users.noreply.github.com>
      da69f6b7
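
      A minimal sketch of the behavior the refined test covers: pow with a scalar exponent and its backward pass. This assumes the current PyTorch-aligned OneFlow API (`import oneflow as flow`, `flow.tensor`, `Tensor.backward`); at the time of this commit parts of that API still lived under `oneflow.experimental`, so treat the call sites as illustrative rather than exact.

      ```python
      # Illustrative only: scalar-exponent pow and its gradient.
      import oneflow as flow

      x = flow.tensor([1.0, 2.0, 3.0], requires_grad=True)
      y = flow.pow(x, 2.0)   # scalar exponent
      y.sum().backward()
      print(x.grad)          # d(x^2)/dx = 2*x -> [2., 4., 6.]
      ```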
    • Fix optimizer for not supporting all kinds of iterables (#5355) · ed82d1da
      Zhiqiu(Oscar) Xu authored
      
      * added flatten backward
      
      * flatten and softmax backward
      
      * fix bug for not supporting all kinds of iterables in optimizers
      
      Co-authored-by: oneflow-ci-bot <69100618+oneflow-ci-bot@users.noreply.github.com>
      ed82d1da
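
      A hedged sketch of what the iterables fix enables: optimizers accepting any iterable of parameters (a list, the generator returned by `Module.parameters()`, or a list of parameter-group dicts) rather than one concrete container type. The API shown assumes the PyTorch-aligned `flow.optim` / `flow.nn` interface and is illustrative, not a reproduction of the PR's test.

      ```python
      # Illustrative only: several kinds of parameter iterables.
      import oneflow as flow

      model = flow.nn.Linear(4, 2)

      opt_a = flow.optim.SGD(model.parameters(), lr=0.1)          # generator
      opt_b = flow.optim.SGD(list(model.parameters()), lr=0.1)    # plain list
      opt_c = flow.optim.SGD([{"params": model.parameters()}], lr=0.1)  # param groups
      ```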
    • Feat graph autograd engine (#5296) · a6430075
      Yinggang Wang authored
      
      * feat(GraphAutogradEngine): add GraphAutogradEngine
      
      * feat(GraphEngine): support backward interface
      
      * feat(GraphEngine): support autograd.backward()
      
      * fix(GraphEngine): handle grad_fn entries that appear in next_functions but cannot be applied
      
      * feat(GraphEngine): support autograd.grad interface
      
      * style(*): add JUST
      
      * fix(GraphEngine): fix autograd.grad bugs
      
      * test(Autograd): add flow.autograd test
      
      * fix(GraphEngine): fix autograd.grad bug
      
      * refine codes
      
      * style(*): remove comments
      
      Co-authored-by: oneflow-ci-bot <69100618+oneflow-ci-bot@users.noreply.github.com>
      a6430075
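
      A short sketch of the two entry points this engine backs, `autograd.grad()` and `autograd.backward()` (invoked here via `Tensor.backward`). It is written against the PyTorch-aligned OneFlow API; exact namespaces may have differed when this PR landed.

      ```python
      # Illustrative only: the two autograd entry points added/supported by this PR.
      import oneflow as flow

      x = flow.tensor([2.0, 3.0], requires_grad=True)

      # autograd.grad: returns gradients without writing to x.grad
      y = (x * x).sum()
      (gx,) = flow.autograd.grad(y, (x,))
      print(gx)        # 2*x -> [4., 6.]

      # autograd.backward (via Tensor.backward): accumulates into x.grad
      z = (x * x).sum()
      z.backward()
      print(x.grad)    # [4., 6.]
      ```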
    • [Functional] Part7: Migrate pooling ops (#5253) · 35549549
      Houjiang Chen authored
      
      * Dev functional interface.
      
      * Remove repeated function_traits
      
      * Implement add and add_scalar static functional op.
      
      * Remove unused code
      
      * Refactor
      
      * Refine
      
      * Generate and export functional apis.
      
      * Refine
      
      * Refine
      
      * Refine
      
      * Generate functional api and pybind cpp when building the project.
      
      * Refine code style and implement normalization functor.
      
      * Fix cmake
      
      * Add PyYAML requirement.
      
      * Add JUST
      
      * Fix scalar IsIntegral and IsSigned
      
      * Add scalar add grad func.
      
      * Fix norm grad func to support dynamic attrs.
      
      * Refactor math modules
      
      * Refactor activation modules.
      
      * Support DataType.
      
      * Add functional range.
      
      * Support DataType.
      
      * Add functional argmax and flatten.
      
      * Add functional argwhere.
      
      * Add functional broadcast_like
      
      * Add functional cast, zeros_like and ones_like.
      
      * Recursively determine input tensors.
      
      * Add functional concat, bias_add and conv2d.
      
      * Recursively determine input tensors.
      
      * Add functional conv2d, bias_add, eq, exp, expand, gather, dim_gather, greater, less, matmul and broadcast_matmul.
      
      * Fix scalar_div_by_tensor grad.
      
      * Update generate_functional_api.py
      
      * Update generate_functional_api.py
      
      * Add functional activation, argmax, eq, layer norm etc.
      
      * Fix dynamic sparse_softmax_cross_entropy.
      
      * Add functional expand_dims and where.
      
      * Fix conversion from python object to dtype.
      
      * Fix conversion from python object to dtype.
      
      * Access device_infer_func dynamically since this func may not have been registered when the op expr was constructed.
      
      * Add more functional apis, and fix bugs.
      
      * Fix crash caused by casting a function to the wrong signature, which makes the converted function's behavior undefined.
      
      * Check and throw error.
      
      * Refine
      
      * Use Maybe instead of throwing exception directly.
      
      * Reformat
      
      * Fix conv module, refactor bias_add grad func to support dynamic attrs.
      
      * Use composed attrs when creating kernel state.
      
      * Enable static conv op.
      
      * Update generated functional files only when different.
      
      * Use default target other than custom command.
      
      * Create new kernel state if in eager mode
      
      * make conv_kernels stateless
      
      * fix typo
      
      * add conv grad functor; make conv_gpu_kernel stateless
      
      * Remove unused code.
      
      * Add partial unary and math functional apis.
      
      * Revert elementwise pow.
      
      * auto format by CI
      
      * move conv grad functor to grad_functor; add poolNdGrad functor, fix pool in gradient_funcs to use functional api
      
      * Revert "Access device_infer_func dynamically since this func maybe has not been registered while the op expr was constructed."
      
      This reverts commit 07cc4a59dc62ee5bf885f3c0402cfdc3a2674721.
      
      * Lazily construct functors to make sure that the operators have already been registered.
      
      * fix minor bug in pool grad
      
      * make pool_gpu_kernel stateless
      
      * Refine function library
      
      * Refine code style.
      
      * refactor code
      
      * Support add with a large number of inputs.
      
      * Support concat with any number of inputs.
      
      * Support add with a large number of inputs.
      
      * Update oneflow/python/nn/modules/math_ops.py
      
      Co-authored-by: Yinggang Wang <wyg19970408@gmail.com>
      
      * Refine
      
      * Reformat
      
      * Refine code style.
      
      * make constant_kernel stateless (#5242)
      
      * Migrate binary and activation ops.
      
      * Migrate array ops.
      
      * Add or refactor activation grad funcs.
      
      * Add or refactor activation grad funcs.
      
      * Revert maxpool test.
      
      * Revert unpack all
      
      * Fix masked fill
      
      * Refine
      
      * Add nn ops.
      
      * Refine
      
      * Refine
      
      * Migrate conv op
      
      * Revert changes.
      
      * Refine code style
      
      * Fix pooling kernel
      
      * auto format by CI
      
      Co-authored-by: VertexC <bob2420083992@gmail.com>
      Co-authored-by: oneflow-ci-bot <ci-bot@oneflow.org>
      Co-authored-by: Yinggang Wang <wyg19970408@gmail.com>
      Co-authored-by: Luyang <flowingsun007@163.com>
      Co-authored-by: oneflow-ci-bot <69100618+oneflow-ci-bot@users.noreply.github.com>
      35549549
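
      An illustrative sketch only: after this migration, pooling modules keep the same Python call site but forward through the migrated functional pooling op, with backward handled by the added PoolNdGrad functor and the pooling kernels kept stateless. The API below assumes the PyTorch-aligned OneFlow interface (`flow.nn.MaxPool2d`, `flow.randn`); it is not code from the PR itself.

      ```python
      # Illustrative only: unchanged call site over the migrated functional pooling op.
      import oneflow as flow

      pool = flow.nn.MaxPool2d(kernel_size=2, stride=2)
      x = flow.randn(1, 3, 8, 8, requires_grad=True)
      y = pool(x)            # forward routes through the functional pooling op
      y.sum().backward()     # backward handled by the pooling grad functor
      print(y.shape)         # (1, 3, 4, 4)
      print(x.grad.shape)    # (1, 3, 8, 8)
      ```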
    • Add BUILD_BYPRODUCTS for ExternalProject_Add (#5316) · c85b557e
      Shenghang Tsai authored
      
      * Add BUILD_BYPRODUCTS for ExternalProject_Add
      
      * build in one step if the generator is Ninja
      
      * fix yml syntax
      
      * limit max-parallel
      
      * limit max-parallel
      
      * rm useless
      
      * refine
      
      * add --shm-size=8g
      
      Co-authored-by: oneflow-ci-bot <69100618+oneflow-ci-bot@users.noreply.github.com>
      c85b557e